00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2404 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3669 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.211 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.215 The recommended git tool is: git 00:00:00.216 using credential 00000000-0000-0000-0000-000000000002 00:00:00.217 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.234 Fetching changes from the remote Git repository 00:00:00.236 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.253 Using shallow fetch with depth 1 00:00:00.253 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.253 > git --version # timeout=10 00:00:00.269 > git --version # 'git version 2.39.2' 00:00:00.269 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.281 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.281 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.576 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.589 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.601 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.601 > git config core.sparsecheckout # timeout=10 00:00:07.616 > git read-tree -mu HEAD # timeout=10 00:00:07.634 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.688 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.689 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.811 [Pipeline] Start of Pipeline 00:00:07.826 [Pipeline] library 00:00:07.827 Loading library shm_lib@master 00:00:07.828 Library shm_lib@master is cached. Copying from home. 00:00:07.845 [Pipeline] node 00:00:07.859 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.861 [Pipeline] { 00:00:07.873 [Pipeline] catchError 00:00:07.875 [Pipeline] { 00:00:07.890 [Pipeline] wrap 00:00:07.902 [Pipeline] { 00:00:07.909 [Pipeline] stage 00:00:07.911 [Pipeline] { (Prologue) 00:00:07.929 [Pipeline] echo 00:00:07.931 Node: VM-host-SM9 00:00:07.937 [Pipeline] cleanWs 00:00:07.946 [WS-CLEANUP] Deleting project workspace... 00:00:07.946 [WS-CLEANUP] Deferred wipeout is used... 
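The prologue above shows Jenkins resolving the jbp pipeline repository with a depth-1 fetch and a detached checkout of the fetched revision. A condensed bash sketch approximating those git steps outside Jenkins (the URL and revision are taken from the log; the workspace path, credentials, and per-command timeouts are omitted):

#!/usr/bin/env bash
# Sketch: reproduce the depth-1 checkout performed in the prologue above.
# Assumes network access to review.spdk.io; revision is the one shown in the log.
set -euo pipefail

repo_url=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
rev=db4637e8b949f278f369ec13f70585206ccd9507

mkdir -p jbp && cd jbp
git init -q .
# Shallow fetch of master only, matching "git fetch --tags --force --depth=1".
git fetch --tags --force --depth=1 -- "$repo_url" refs/heads/master
# Detached checkout of the fetched revision, as the pipeline does with FETCH_HEAD.
git checkout -f "$rev"
git log -1 --oneline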
00:00:07.951 [WS-CLEANUP] done 00:00:08.203 [Pipeline] setCustomBuildProperty 00:00:08.297 [Pipeline] httpRequest 00:00:09.280 [Pipeline] echo 00:00:09.282 Sorcerer 10.211.164.101 is alive 00:00:09.289 [Pipeline] retry 00:00:09.290 [Pipeline] { 00:00:09.300 [Pipeline] httpRequest 00:00:09.303 HttpMethod: GET 00:00:09.304 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.304 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.305 Response Code: HTTP/1.1 200 OK 00:00:09.306 Success: Status code 200 is in the accepted range: 200,404 00:00:09.306 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.534 [Pipeline] } 00:00:10.553 [Pipeline] // retry 00:00:10.562 [Pipeline] sh 00:00:10.845 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.863 [Pipeline] httpRequest 00:00:11.535 [Pipeline] echo 00:00:11.537 Sorcerer 10.211.164.101 is alive 00:00:11.547 [Pipeline] retry 00:00:11.549 [Pipeline] { 00:00:11.565 [Pipeline] httpRequest 00:00:11.570 HttpMethod: GET 00:00:11.570 URL: http://10.211.164.101/packages/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz 00:00:11.571 Sending request to url: http://10.211.164.101/packages/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz 00:00:11.590 Response Code: HTTP/1.1 200 OK 00:00:11.590 Success: Status code 200 is in the accepted range: 200,404 00:00:11.591 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz 00:01:44.777 [Pipeline] } 00:01:44.792 [Pipeline] // retry 00:01:44.798 [Pipeline] sh 00:01:45.073 + tar --no-same-owner -xf spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz 00:01:47.617 [Pipeline] sh 00:01:47.898 + git -C spdk log --oneline -n5 00:01:47.898 2a91567e4 CHANGELOG.md: corrected typo 00:01:47.898 6c35d974e lib/nvme: destruct controllers that failed init asynchronously 00:01:47.898 414f91a0c lib/nvmf: Fix double free of connect request 00:01:47.898 d8f6e798d nvme: Fix discovery loop when target has no entry 00:01:47.898 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen 00:01:47.916 [Pipeline] withCredentials 00:01:47.927 > git --version # timeout=10 00:01:47.939 > git --version # 'git version 2.39.2' 00:01:47.954 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:47.957 [Pipeline] { 00:01:47.966 [Pipeline] retry 00:01:47.968 [Pipeline] { 00:01:47.984 [Pipeline] sh 00:01:48.264 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:48.275 [Pipeline] } 00:01:48.295 [Pipeline] // retry 00:01:48.300 [Pipeline] } 00:01:48.318 [Pipeline] // withCredentials 00:01:48.330 [Pipeline] httpRequest 00:01:48.730 [Pipeline] echo 00:01:48.732 Sorcerer 10.211.164.101 is alive 00:01:48.741 [Pipeline] retry 00:01:48.744 [Pipeline] { 00:01:48.760 [Pipeline] httpRequest 00:01:48.765 HttpMethod: GET 00:01:48.766 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:48.766 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:48.767 Response Code: HTTP/1.1 200 OK 00:01:48.768 Success: Status code 200 is in the accepted range: 200,404 00:01:48.768 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:55.012 [Pipeline] } 
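The two httpRequest/retry blocks above pull pre-packaged jbp and spdk tarballs from the internal package cache (Sorcerer, 10.211.164.101) and unpack them with tar --no-same-owner. A rough bash equivalent of that fetch-with-retry-then-extract pattern; curl is an assumption here, since the pipeline itself uses the Jenkins httpRequest step rather than a shell command:

#!/usr/bin/env bash
# Sketch: approximate the "httpRequest with retry, then tar" steps from the log.
# Cache host and tarball name are taken from the log; curl usage is an assumption.
set -euo pipefail

base=http://10.211.164.101/packages
pkg=spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz

# Retry a few times, mirroring the [Pipeline] retry wrapper.
for attempt in 1 2 3; do
    curl -fSs -o "$pkg" "$base/$pkg" && break
    echo "fetch attempt $attempt failed, retrying" >&2
    sleep 5
done
[[ -s "$pkg" ]]   # give up here if every attempt failed

# Unpack without restoring the archive's recorded ownership, as in the log.
tar --no-same-owner -xf "$pkg"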
00:01:55.029 [Pipeline] // retry 00:01:55.036 [Pipeline] sh 00:01:55.316 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:56.803 [Pipeline] sh 00:01:57.092 + git -C dpdk log --oneline -n5 00:01:57.093 caf0f5d395 version: 22.11.4 00:01:57.093 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:57.093 dc9c799c7d vhost: fix missing spinlock unlock 00:01:57.093 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:57.093 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:57.111 [Pipeline] writeFile 00:01:57.128 [Pipeline] sh 00:01:57.411 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:57.422 [Pipeline] sh 00:01:57.702 + cat autorun-spdk.conf 00:01:57.702 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.702 SPDK_TEST_NVMF=1 00:01:57.702 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.702 SPDK_TEST_URING=1 00:01:57.702 SPDK_TEST_USDT=1 00:01:57.702 SPDK_RUN_UBSAN=1 00:01:57.702 NET_TYPE=virt 00:01:57.702 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:57.702 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:57.702 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.709 RUN_NIGHTLY=1 00:01:57.711 [Pipeline] } 00:01:57.726 [Pipeline] // stage 00:01:57.743 [Pipeline] stage 00:01:57.745 [Pipeline] { (Run VM) 00:01:57.760 [Pipeline] sh 00:01:58.046 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:58.046 + echo 'Start stage prepare_nvme.sh' 00:01:58.046 Start stage prepare_nvme.sh 00:01:58.046 + [[ -n 2 ]] 00:01:58.046 + disk_prefix=ex2 00:01:58.046 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:58.046 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:58.046 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:58.046 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:58.046 ++ SPDK_TEST_NVMF=1 00:01:58.046 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:58.046 ++ SPDK_TEST_URING=1 00:01:58.046 ++ SPDK_TEST_USDT=1 00:01:58.046 ++ SPDK_RUN_UBSAN=1 00:01:58.046 ++ NET_TYPE=virt 00:01:58.046 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:58.046 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:58.046 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:58.046 ++ RUN_NIGHTLY=1 00:01:58.046 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:58.046 + nvme_files=() 00:01:58.046 + declare -A nvme_files 00:01:58.046 + backend_dir=/var/lib/libvirt/images/backends 00:01:58.046 + nvme_files['nvme.img']=5G 00:01:58.046 + nvme_files['nvme-cmb.img']=5G 00:01:58.046 + nvme_files['nvme-multi0.img']=4G 00:01:58.046 + nvme_files['nvme-multi1.img']=4G 00:01:58.046 + nvme_files['nvme-multi2.img']=4G 00:01:58.046 + nvme_files['nvme-openstack.img']=8G 00:01:58.046 + nvme_files['nvme-zns.img']=5G 00:01:58.046 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:58.046 + (( SPDK_TEST_FTL == 1 )) 00:01:58.046 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:58.046 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:58.046 + for nvme in "${!nvme_files[@]}" 00:01:58.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:58.046 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.046 + for nvme in "${!nvme_files[@]}" 00:01:58.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:58.046 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:58.046 + for nvme in "${!nvme_files[@]}" 00:01:58.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:58.046 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:58.046 + for nvme in "${!nvme_files[@]}" 00:01:58.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:58.046 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:58.046 + for nvme in "${!nvme_files[@]}" 00:01:58.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:58.046 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.046 + for nvme in "${!nvme_files[@]}" 00:01:58.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:58.046 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.046 + for nvme in "${!nvme_files[@]}" 00:01:58.046 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:58.306 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:58.306 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:58.306 + echo 'End stage prepare_nvme.sh' 00:01:58.306 End stage prepare_nvme.sh 00:01:58.318 [Pipeline] sh 00:01:58.600 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:58.600 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:01:58.600 00:01:58.600 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:58.600 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:58.600 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:58.600 HELP=0 00:01:58.600 DRY_RUN=0 00:01:58.600 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:58.600 NVME_DISKS_TYPE=nvme,nvme, 00:01:58.600 NVME_AUTO_CREATE=0 00:01:58.600 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:58.600 NVME_CMB=,, 00:01:58.600 NVME_PMR=,, 00:01:58.600 NVME_ZNS=,, 00:01:58.600 NVME_MS=,, 00:01:58.600 NVME_FDP=,, 
00:01:58.600 SPDK_VAGRANT_DISTRO=fedora39 00:01:58.600 SPDK_VAGRANT_VMCPU=10 00:01:58.600 SPDK_VAGRANT_VMRAM=12288 00:01:58.600 SPDK_VAGRANT_PROVIDER=libvirt 00:01:58.600 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:58.601 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:58.601 SPDK_OPENSTACK_NETWORK=0 00:01:58.601 VAGRANT_PACKAGE_BOX=0 00:01:58.601 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:58.601 FORCE_DISTRO=true 00:01:58.601 VAGRANT_BOX_VERSION= 00:01:58.601 EXTRA_VAGRANTFILES= 00:01:58.601 NIC_MODEL=e1000 00:01:58.601 00:01:58.601 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:58.601 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:01.140 Bringing machine 'default' up with 'libvirt' provider... 00:02:01.707 ==> default: Creating image (snapshot of base box volume). 00:02:01.966 ==> default: Creating domain with the following settings... 00:02:01.966 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732637247_381d511c72cc78d31782 00:02:01.966 ==> default: -- Domain type: kvm 00:02:01.966 ==> default: -- Cpus: 10 00:02:01.966 ==> default: -- Feature: acpi 00:02:01.966 ==> default: -- Feature: apic 00:02:01.966 ==> default: -- Feature: pae 00:02:01.966 ==> default: -- Memory: 12288M 00:02:01.966 ==> default: -- Memory Backing: hugepages: 00:02:01.966 ==> default: -- Management MAC: 00:02:01.966 ==> default: -- Loader: 00:02:01.966 ==> default: -- Nvram: 00:02:01.966 ==> default: -- Base box: spdk/fedora39 00:02:01.966 ==> default: -- Storage pool: default 00:02:01.966 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732637247_381d511c72cc78d31782.img (20G) 00:02:01.966 ==> default: -- Volume Cache: default 00:02:01.966 ==> default: -- Kernel: 00:02:01.966 ==> default: -- Initrd: 00:02:01.966 ==> default: -- Graphics Type: vnc 00:02:01.966 ==> default: -- Graphics Port: -1 00:02:01.966 ==> default: -- Graphics IP: 127.0.0.1 00:02:01.966 ==> default: -- Graphics Password: Not defined 00:02:01.966 ==> default: -- Video Type: cirrus 00:02:01.966 ==> default: -- Video VRAM: 9216 00:02:01.966 ==> default: -- Sound Type: 00:02:01.966 ==> default: -- Keymap: en-us 00:02:01.966 ==> default: -- TPM Path: 00:02:01.966 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:01.966 ==> default: -- Command line args: 00:02:01.966 ==> default: -> value=-device, 00:02:01.966 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:01.966 ==> default: -> value=-drive, 00:02:01.966 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:02:01.966 ==> default: -> value=-device, 00:02:01.966 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:01.966 ==> default: -> value=-device, 00:02:01.966 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:01.966 ==> default: -> value=-drive, 00:02:01.966 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:01.966 ==> default: -> value=-device, 00:02:01.966 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:01.966 ==> default: -> value=-drive, 00:02:01.966 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:01.966 ==> default: -> value=-device, 00:02:01.966 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:01.966 ==> default: -> value=-drive, 00:02:01.966 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:01.966 ==> default: -> value=-device, 00:02:01.966 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:01.966 ==> default: Creating shared folders metadata... 00:02:01.966 ==> default: Starting domain. 00:02:03.345 ==> default: Waiting for domain to get an IP address... 00:02:21.521 ==> default: Waiting for SSH to become available... 00:02:21.521 ==> default: Configuring and enabling network interfaces... 00:02:24.063 default: SSH address: 192.168.121.35:22 00:02:24.063 default: SSH username: vagrant 00:02:24.063 default: SSH auth method: private key 00:02:26.593 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:33.182 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:39.864 ==> default: Mounting SSHFS shared folder... 00:02:40.432 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:40.432 ==> default: Checking Mount.. 00:02:41.812 ==> default: Folder Successfully Mounted! 00:02:41.812 ==> default: Running provisioner: file... 00:02:42.380 default: ~/.gitconfig => .gitconfig 00:02:42.945 00:02:42.945 SUCCESS! 00:02:42.945 00:02:42.945 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:42.945 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:42.946 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:42.946 00:02:42.953 [Pipeline] } 00:02:42.963 [Pipeline] // stage 00:02:42.970 [Pipeline] dir 00:02:42.971 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:42.972 [Pipeline] { 00:02:42.984 [Pipeline] catchError 00:02:42.985 [Pipeline] { 00:02:42.997 [Pipeline] sh 00:02:43.274 + vagrant ssh-config --host vagrant 00:02:43.274 + sed -ne /^Host/,$p 00:02:43.274 + tee ssh_conf 00:02:47.458 Host vagrant 00:02:47.458 HostName 192.168.121.35 00:02:47.458 User vagrant 00:02:47.458 Port 22 00:02:47.458 UserKnownHostsFile /dev/null 00:02:47.458 StrictHostKeyChecking no 00:02:47.458 PasswordAuthentication no 00:02:47.458 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:47.458 IdentitiesOnly yes 00:02:47.458 LogLevel FATAL 00:02:47.458 ForwardAgent yes 00:02:47.458 ForwardX11 yes 00:02:47.458 00:02:47.471 [Pipeline] withEnv 00:02:47.473 [Pipeline] { 00:02:47.486 [Pipeline] sh 00:02:47.765 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:47.765 source /etc/os-release 00:02:47.765 [[ -e /image.version ]] && img=$(< /image.version) 00:02:47.765 # Minimal, systemd-like check. 
00:02:47.765 if [[ -e /.dockerenv ]]; then 00:02:47.765 # Clear garbage from the node's name: 00:02:47.765 # agt-er_autotest_547-896 -> autotest_547-896 00:02:47.765 # $HOSTNAME is the actual container id 00:02:47.765 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:47.765 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:47.765 # We can assume this is a mount from a host where container is running, 00:02:47.765 # so fetch its hostname to easily identify the target swarm worker. 00:02:47.765 container="$(< /etc/hostname) ($agent)" 00:02:47.765 else 00:02:47.765 # Fallback 00:02:47.765 container=$agent 00:02:47.765 fi 00:02:47.765 fi 00:02:47.765 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:47.765 00:02:48.034 [Pipeline] } 00:02:48.051 [Pipeline] // withEnv 00:02:48.060 [Pipeline] setCustomBuildProperty 00:02:48.077 [Pipeline] stage 00:02:48.080 [Pipeline] { (Tests) 00:02:48.101 [Pipeline] sh 00:02:48.383 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:48.656 [Pipeline] sh 00:02:48.935 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:49.208 [Pipeline] timeout 00:02:49.209 Timeout set to expire in 1 hr 0 min 00:02:49.211 [Pipeline] { 00:02:49.227 [Pipeline] sh 00:02:49.510 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:50.077 HEAD is now at 2a91567e4 CHANGELOG.md: corrected typo 00:02:50.088 [Pipeline] sh 00:02:50.367 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:50.640 [Pipeline] sh 00:02:50.919 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:51.193 [Pipeline] sh 00:02:51.472 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:51.472 ++ readlink -f spdk_repo 00:02:51.730 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:51.730 + [[ -n /home/vagrant/spdk_repo ]] 00:02:51.730 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:51.730 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:51.730 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:51.730 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:51.730 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:51.730 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:51.730 + cd /home/vagrant/spdk_repo 00:02:51.730 + source /etc/os-release 00:02:51.730 ++ NAME='Fedora Linux' 00:02:51.731 ++ VERSION='39 (Cloud Edition)' 00:02:51.731 ++ ID=fedora 00:02:51.731 ++ VERSION_ID=39 00:02:51.731 ++ VERSION_CODENAME= 00:02:51.731 ++ PLATFORM_ID=platform:f39 00:02:51.731 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:51.731 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:51.731 ++ LOGO=fedora-logo-icon 00:02:51.731 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:51.731 ++ HOME_URL=https://fedoraproject.org/ 00:02:51.731 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:51.731 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:51.731 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:51.731 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:51.731 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:51.731 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:51.731 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:51.731 ++ SUPPORT_END=2024-11-12 00:02:51.731 ++ VARIANT='Cloud Edition' 00:02:51.731 ++ VARIANT_ID=cloud 00:02:51.731 + uname -a 00:02:51.731 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:51.731 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:51.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:51.989 Hugepages 00:02:51.989 node hugesize free / total 00:02:51.989 node0 1048576kB 0 / 0 00:02:51.989 node0 2048kB 0 / 0 00:02:51.989 00:02:51.989 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:51.989 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:52.277 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:52.277 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:52.277 + rm -f /tmp/spdk-ld-path 00:02:52.277 + source autorun-spdk.conf 00:02:52.277 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:52.277 ++ SPDK_TEST_NVMF=1 00:02:52.277 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:52.277 ++ SPDK_TEST_URING=1 00:02:52.277 ++ SPDK_TEST_USDT=1 00:02:52.277 ++ SPDK_RUN_UBSAN=1 00:02:52.277 ++ NET_TYPE=virt 00:02:52.277 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:52.277 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:52.277 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:52.277 ++ RUN_NIGHTLY=1 00:02:52.277 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:52.277 + [[ -n '' ]] 00:02:52.277 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:52.277 + for M in /var/spdk/build-*-manifest.txt 00:02:52.277 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:52.277 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:52.277 + for M in /var/spdk/build-*-manifest.txt 00:02:52.277 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:52.277 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:52.277 + for M in /var/spdk/build-*-manifest.txt 00:02:52.277 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:52.277 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:52.277 ++ uname 00:02:52.277 + [[ Linux == \L\i\n\u\x ]] 00:02:52.277 + sudo dmesg -T 00:02:52.277 + sudo dmesg --clear 00:02:52.277 + dmesg_pid=5998 00:02:52.277 + [[ Fedora Linux == FreeBSD ]] 
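The setup.sh status output above summarizes per-node hugepage counts and the emulated NVMe controllers (nvme0 with one namespace, nvme1 with three). A small sketch of how the hugepage portion of that summary can be read directly from sysfs, using the standard kernel paths; this is an illustration only, not the actual setup.sh implementation:

#!/usr/bin/env bash
# Sketch: print free/total hugepages per NUMA node from sysfs, similar to the
# "node hugesize free / total" lines shown by setup.sh status above.
# Standard kernel sysfs layout is assumed; this is not the setup.sh code itself.
set -euo pipefail
shopt -s nullglob

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}          # e.g. 2048kB or 1048576kB
        free=$(< "$hp/free_hugepages")
        total=$(< "$hp/nr_hugepages")
        printf '%s %s %s / %s\n' "$(basename "$node")" "$size" "$free" "$total"
    done
done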
00:02:52.277 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:52.277 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:52.277 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:52.277 + sudo dmesg -Tw 00:02:52.277 + [[ -x /usr/src/fio-static/fio ]] 00:02:52.277 + export FIO_BIN=/usr/src/fio-static/fio 00:02:52.277 + FIO_BIN=/usr/src/fio-static/fio 00:02:52.277 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:52.277 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:52.277 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:52.277 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:52.277 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:52.277 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:52.277 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:52.277 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:52.277 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:52.277 16:08:17 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:52.277 16:08:17 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:52.277 16:08:17 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:52.277 16:08:17 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:52.277 16:08:17 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:52.544 16:08:17 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:52.544 16:08:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:52.544 16:08:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:52.544 16:08:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:52.544 16:08:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.544 16:08:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.544 16:08:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.544 16:08:17 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.544 16:08:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.544 16:08:17 -- paths/export.sh@5 -- $ export PATH 00:02:52.544 16:08:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.544 16:08:17 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:52.544 16:08:17 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:52.544 16:08:17 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732637297.XXXXXX 00:02:52.544 16:08:17 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732637297.WKxHUi 00:02:52.544 16:08:17 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:52.544 16:08:17 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:02:52.544 16:08:17 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:52.544 16:08:17 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:52.544 16:08:17 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:52.544 16:08:17 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:52.544 16:08:17 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:52.544 16:08:17 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:52.544 16:08:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.544 16:08:17 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:52.544 16:08:17 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:52.544 16:08:17 -- pm/common@17 -- $ local monitor 00:02:52.544 16:08:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.544 16:08:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.544 16:08:17 -- pm/common@25 -- $ sleep 1 00:02:52.544 
16:08:17 -- pm/common@21 -- $ date +%s 00:02:52.544 16:08:17 -- pm/common@21 -- $ date +%s 00:02:52.544 16:08:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732637297 00:02:52.544 16:08:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732637297 00:02:52.544 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732637297_collect-vmstat.pm.log 00:02:52.544 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732637297_collect-cpu-load.pm.log 00:02:53.482 16:08:18 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:53.482 16:08:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:53.482 16:08:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:53.482 16:08:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:53.482 16:08:18 -- spdk/autobuild.sh@16 -- $ date -u 00:02:53.482 Tue Nov 26 04:08:18 PM UTC 2024 00:02:53.482 16:08:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:53.482 v25.01-pre-240-g2a91567e4 00:02:53.482 16:08:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:53.482 16:08:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:53.482 16:08:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:53.482 16:08:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:53.482 16:08:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:53.482 16:08:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.482 ************************************ 00:02:53.482 START TEST ubsan 00:02:53.482 ************************************ 00:02:53.482 using ubsan 00:02:53.482 16:08:18 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:53.482 00:02:53.482 real 0m0.000s 00:02:53.482 user 0m0.000s 00:02:53.482 sys 0m0.000s 00:02:53.482 16:08:18 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:53.482 ************************************ 00:02:53.482 16:08:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:53.482 END TEST ubsan 00:02:53.482 ************************************ 00:02:53.482 16:08:19 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:53.482 16:08:19 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:53.482 16:08:19 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:53.482 16:08:19 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:53.482 16:08:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:53.482 16:08:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.482 ************************************ 00:02:53.482 START TEST build_native_dpdk 00:02:53.482 ************************************ 00:02:53.482 16:08:19 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:53.482 16:08:19 build_native_dpdk -- 
common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:53.482 16:08:19 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:53.483 caf0f5d395 version: 22.11.4 00:02:53.483 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:53.483 dc9c799c7d vhost: fix missing spinlock unlock 00:02:53.483 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:53.483 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:53.483 16:08:19 build_native_dpdk 
-- common/autobuild_common.sh@175 -- $ uname -s 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:53.483 patching file config/rte_config.h 00:02:53.483 Hunk #1 succeeded at 60 (offset 1 line). 
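The xtrace above walks through the scripts/common.sh version comparison used to decide whether the DPDK compatibility patches apply: 22.11.4 is split on ".", "-", ":" and compared field by field against 21.11.0, and since 22 > 21 the "less than" branch returns 1. A condensed, self-contained restatement of that field-by-field numeric comparison, for readers following the trace; it is a simplified sketch of the traced logic, not a copy of scripts/common.sh:

#!/usr/bin/env bash
# Sketch: field-by-field numeric version comparison, mirroring the cmp_versions
# trace above (split on ".", "-", ":" and compare each field numerically).
# Simplified relative to scripts/common.sh; non-numeric fields are treated as 0.

version_lt() {                      # returns 0 if $1 < $2
    local -a a b
    local i x y n
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}
        [[ $x =~ ^[0-9]+$ ]] || x=0
        [[ $y =~ ^[0-9]+$ ]] || y=0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                        # equal versions are not "less than"
}

# Matches the two decisions taken in the log:
version_lt 22.11.4 21.11.0 && echo older || echo "not older"   # not older -> patch applies
version_lt 22.11.4 24.07.0 && echo older || echo "not older"   # older -> pcapng patch applies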
00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:53.483 patching file lib/pcapng/rte_pcapng.c 00:02:53.483 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:53.483 16:08:19 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:53.483 16:08:19 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:53.741 16:08:19 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:53.741 16:08:19 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:53.741 16:08:19 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:59.009 The Meson build system 00:02:59.009 Version: 1.5.0 00:02:59.009 
Source dir: /home/vagrant/spdk_repo/dpdk 00:02:59.009 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:59.009 Build type: native build 00:02:59.009 Program cat found: YES (/usr/bin/cat) 00:02:59.009 Project name: DPDK 00:02:59.009 Project version: 22.11.4 00:02:59.009 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:59.009 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:59.009 Host machine cpu family: x86_64 00:02:59.009 Host machine cpu: x86_64 00:02:59.009 Message: ## Building in Developer Mode ## 00:02:59.009 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:59.009 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:59.009 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:59.009 Program objdump found: YES (/usr/bin/objdump) 00:02:59.009 Program python3 found: YES (/usr/bin/python3) 00:02:59.009 Program cat found: YES (/usr/bin/cat) 00:02:59.009 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:59.009 Checking for size of "void *" : 8 00:02:59.009 Checking for size of "void *" : 8 (cached) 00:02:59.009 Library m found: YES 00:02:59.009 Library numa found: YES 00:02:59.009 Has header "numaif.h" : YES 00:02:59.009 Library fdt found: NO 00:02:59.009 Library execinfo found: NO 00:02:59.009 Has header "execinfo.h" : YES 00:02:59.009 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:59.009 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:59.009 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:59.009 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:59.009 Run-time dependency openssl found: YES 3.1.1 00:02:59.009 Run-time dependency libpcap found: YES 1.10.4 00:02:59.009 Has header "pcap.h" with dependency libpcap: YES 00:02:59.009 Compiler for C supports arguments -Wcast-qual: YES 00:02:59.009 Compiler for C supports arguments -Wdeprecated: YES 00:02:59.009 Compiler for C supports arguments -Wformat: YES 00:02:59.009 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:59.009 Compiler for C supports arguments -Wformat-security: NO 00:02:59.009 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:59.009 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:59.009 Compiler for C supports arguments -Wnested-externs: YES 00:02:59.009 Compiler for C supports arguments -Wold-style-definition: YES 00:02:59.009 Compiler for C supports arguments -Wpointer-arith: YES 00:02:59.009 Compiler for C supports arguments -Wsign-compare: YES 00:02:59.009 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:59.009 Compiler for C supports arguments -Wundef: YES 00:02:59.009 Compiler for C supports arguments -Wwrite-strings: YES 00:02:59.009 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:59.009 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:59.009 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:59.009 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:59.009 Compiler for C supports arguments -mavx512f: YES 00:02:59.009 Checking if "AVX512 checking" compiles: YES 00:02:59.009 Fetching value of define "__SSE4_2__" : 1 00:02:59.009 Fetching value of define "__AES__" : 1 00:02:59.009 Fetching value of define "__AVX__" : 1 00:02:59.009 Fetching value of define "__AVX2__" : 1 
00:02:59.009 Fetching value of define "__AVX512BW__" : (undefined) 00:02:59.009 Fetching value of define "__AVX512CD__" : (undefined) 00:02:59.009 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:59.009 Fetching value of define "__AVX512F__" : (undefined) 00:02:59.009 Fetching value of define "__AVX512VL__" : (undefined) 00:02:59.009 Fetching value of define "__PCLMUL__" : 1 00:02:59.009 Fetching value of define "__RDRND__" : 1 00:02:59.009 Fetching value of define "__RDSEED__" : 1 00:02:59.009 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:59.009 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:59.009 Message: lib/kvargs: Defining dependency "kvargs" 00:02:59.009 Message: lib/telemetry: Defining dependency "telemetry" 00:02:59.009 Checking for function "getentropy" : YES 00:02:59.009 Message: lib/eal: Defining dependency "eal" 00:02:59.009 Message: lib/ring: Defining dependency "ring" 00:02:59.009 Message: lib/rcu: Defining dependency "rcu" 00:02:59.009 Message: lib/mempool: Defining dependency "mempool" 00:02:59.009 Message: lib/mbuf: Defining dependency "mbuf" 00:02:59.009 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:59.009 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:59.009 Compiler for C supports arguments -mpclmul: YES 00:02:59.009 Compiler for C supports arguments -maes: YES 00:02:59.009 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.009 Compiler for C supports arguments -mavx512bw: YES 00:02:59.009 Compiler for C supports arguments -mavx512dq: YES 00:02:59.009 Compiler for C supports arguments -mavx512vl: YES 00:02:59.009 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:59.009 Compiler for C supports arguments -mavx2: YES 00:02:59.009 Compiler for C supports arguments -mavx: YES 00:02:59.009 Message: lib/net: Defining dependency "net" 00:02:59.009 Message: lib/meter: Defining dependency "meter" 00:02:59.009 Message: lib/ethdev: Defining dependency "ethdev" 00:02:59.009 Message: lib/pci: Defining dependency "pci" 00:02:59.009 Message: lib/cmdline: Defining dependency "cmdline" 00:02:59.009 Message: lib/metrics: Defining dependency "metrics" 00:02:59.009 Message: lib/hash: Defining dependency "hash" 00:02:59.009 Message: lib/timer: Defining dependency "timer" 00:02:59.009 Fetching value of define "__AVX2__" : 1 (cached) 00:02:59.009 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:59.009 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:59.009 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:59.009 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:59.009 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:59.009 Message: lib/acl: Defining dependency "acl" 00:02:59.009 Message: lib/bbdev: Defining dependency "bbdev" 00:02:59.009 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:59.009 Run-time dependency libelf found: YES 0.191 00:02:59.009 Message: lib/bpf: Defining dependency "bpf" 00:02:59.009 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:59.009 Message: lib/compressdev: Defining dependency "compressdev" 00:02:59.009 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:59.009 Message: lib/distributor: Defining dependency "distributor" 00:02:59.009 Message: lib/efd: Defining dependency "efd" 00:02:59.009 Message: lib/eventdev: Defining dependency "eventdev" 00:02:59.009 Message: lib/gpudev: Defining dependency "gpudev" 
00:02:59.009 Message: lib/gro: Defining dependency "gro" 00:02:59.009 Message: lib/gso: Defining dependency "gso" 00:02:59.009 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:59.009 Message: lib/jobstats: Defining dependency "jobstats" 00:02:59.009 Message: lib/latencystats: Defining dependency "latencystats" 00:02:59.009 Message: lib/lpm: Defining dependency "lpm" 00:02:59.009 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:59.009 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:59.009 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:59.009 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:59.009 Message: lib/member: Defining dependency "member" 00:02:59.009 Message: lib/pcapng: Defining dependency "pcapng" 00:02:59.009 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:59.009 Message: lib/power: Defining dependency "power" 00:02:59.009 Message: lib/rawdev: Defining dependency "rawdev" 00:02:59.009 Message: lib/regexdev: Defining dependency "regexdev" 00:02:59.009 Message: lib/dmadev: Defining dependency "dmadev" 00:02:59.009 Message: lib/rib: Defining dependency "rib" 00:02:59.009 Message: lib/reorder: Defining dependency "reorder" 00:02:59.009 Message: lib/sched: Defining dependency "sched" 00:02:59.009 Message: lib/security: Defining dependency "security" 00:02:59.009 Message: lib/stack: Defining dependency "stack" 00:02:59.009 Has header "linux/userfaultfd.h" : YES 00:02:59.009 Message: lib/vhost: Defining dependency "vhost" 00:02:59.009 Message: lib/ipsec: Defining dependency "ipsec" 00:02:59.009 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:59.009 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:59.009 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:59.009 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:59.009 Message: lib/fib: Defining dependency "fib" 00:02:59.009 Message: lib/port: Defining dependency "port" 00:02:59.009 Message: lib/pdump: Defining dependency "pdump" 00:02:59.009 Message: lib/table: Defining dependency "table" 00:02:59.009 Message: lib/pipeline: Defining dependency "pipeline" 00:02:59.010 Message: lib/graph: Defining dependency "graph" 00:02:59.010 Message: lib/node: Defining dependency "node" 00:02:59.010 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:59.010 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:59.010 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:59.010 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:59.010 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:59.010 Compiler for C supports arguments -Wno-unused-value: YES 00:02:59.010 Compiler for C supports arguments -Wno-format: YES 00:02:59.010 Compiler for C supports arguments -Wno-format-security: YES 00:02:59.010 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:00.384 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:00.384 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:00.384 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:00.384 Fetching value of define "__AVX2__" : 1 (cached) 00:03:00.384 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:00.384 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:00.384 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:00.384 Compiler for C supports arguments 
-march=skylake-avx512: YES 00:03:00.384 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:00.384 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:00.384 Configuring doxy-api.conf using configuration 00:03:00.384 Program sphinx-build found: NO 00:03:00.384 Configuring rte_build_config.h using configuration 00:03:00.384 Message: 00:03:00.384 ================= 00:03:00.384 Applications Enabled 00:03:00.384 ================= 00:03:00.384 00:03:00.384 apps: 00:03:00.384 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:03:00.384 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:03:00.384 test-security-perf, 00:03:00.384 00:03:00.384 Message: 00:03:00.384 ================= 00:03:00.384 Libraries Enabled 00:03:00.384 ================= 00:03:00.384 00:03:00.384 libs: 00:03:00.384 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:03:00.384 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:03:00.384 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:03:00.384 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:03:00.384 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:03:00.384 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:00.384 table, pipeline, graph, node, 00:03:00.384 00:03:00.384 Message: 00:03:00.384 =============== 00:03:00.384 Drivers Enabled 00:03:00.384 =============== 00:03:00.384 00:03:00.384 common: 00:03:00.384 00:03:00.384 bus: 00:03:00.384 pci, vdev, 00:03:00.384 mempool: 00:03:00.384 ring, 00:03:00.384 dma: 00:03:00.384 00:03:00.384 net: 00:03:00.384 i40e, 00:03:00.384 raw: 00:03:00.384 00:03:00.384 crypto: 00:03:00.384 00:03:00.384 compress: 00:03:00.384 00:03:00.384 regex: 00:03:00.384 00:03:00.384 vdpa: 00:03:00.384 00:03:00.384 event: 00:03:00.384 00:03:00.384 baseband: 00:03:00.384 00:03:00.384 gpu: 00:03:00.384 00:03:00.384 00:03:00.384 Message: 00:03:00.384 ================= 00:03:00.384 Content Skipped 00:03:00.384 ================= 00:03:00.384 00:03:00.384 apps: 00:03:00.384 00:03:00.384 libs: 00:03:00.384 kni: explicitly disabled via build config (deprecated lib) 00:03:00.384 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:00.384 00:03:00.384 drivers: 00:03:00.384 common/cpt: not in enabled drivers build config 00:03:00.384 common/dpaax: not in enabled drivers build config 00:03:00.384 common/iavf: not in enabled drivers build config 00:03:00.384 common/idpf: not in enabled drivers build config 00:03:00.384 common/mvep: not in enabled drivers build config 00:03:00.384 common/octeontx: not in enabled drivers build config 00:03:00.384 bus/auxiliary: not in enabled drivers build config 00:03:00.384 bus/dpaa: not in enabled drivers build config 00:03:00.384 bus/fslmc: not in enabled drivers build config 00:03:00.384 bus/ifpga: not in enabled drivers build config 00:03:00.384 bus/vmbus: not in enabled drivers build config 00:03:00.384 common/cnxk: not in enabled drivers build config 00:03:00.384 common/mlx5: not in enabled drivers build config 00:03:00.384 common/qat: not in enabled drivers build config 00:03:00.384 common/sfc_efx: not in enabled drivers build config 00:03:00.384 mempool/bucket: not in enabled drivers build config 00:03:00.384 mempool/cnxk: not in enabled drivers build config 00:03:00.384 mempool/dpaa: not in enabled drivers build config 00:03:00.384 mempool/dpaa2: not in enabled drivers build config 
00:03:00.384 mempool/octeontx: not in enabled drivers build config 00:03:00.384 mempool/stack: not in enabled drivers build config 00:03:00.384 dma/cnxk: not in enabled drivers build config 00:03:00.384 dma/dpaa: not in enabled drivers build config 00:03:00.384 dma/dpaa2: not in enabled drivers build config 00:03:00.384 dma/hisilicon: not in enabled drivers build config 00:03:00.384 dma/idxd: not in enabled drivers build config 00:03:00.384 dma/ioat: not in enabled drivers build config 00:03:00.384 dma/skeleton: not in enabled drivers build config 00:03:00.384 net/af_packet: not in enabled drivers build config 00:03:00.384 net/af_xdp: not in enabled drivers build config 00:03:00.384 net/ark: not in enabled drivers build config 00:03:00.384 net/atlantic: not in enabled drivers build config 00:03:00.384 net/avp: not in enabled drivers build config 00:03:00.384 net/axgbe: not in enabled drivers build config 00:03:00.384 net/bnx2x: not in enabled drivers build config 00:03:00.384 net/bnxt: not in enabled drivers build config 00:03:00.384 net/bonding: not in enabled drivers build config 00:03:00.384 net/cnxk: not in enabled drivers build config 00:03:00.384 net/cxgbe: not in enabled drivers build config 00:03:00.384 net/dpaa: not in enabled drivers build config 00:03:00.384 net/dpaa2: not in enabled drivers build config 00:03:00.384 net/e1000: not in enabled drivers build config 00:03:00.384 net/ena: not in enabled drivers build config 00:03:00.384 net/enetc: not in enabled drivers build config 00:03:00.384 net/enetfec: not in enabled drivers build config 00:03:00.384 net/enic: not in enabled drivers build config 00:03:00.384 net/failsafe: not in enabled drivers build config 00:03:00.384 net/fm10k: not in enabled drivers build config 00:03:00.384 net/gve: not in enabled drivers build config 00:03:00.384 net/hinic: not in enabled drivers build config 00:03:00.384 net/hns3: not in enabled drivers build config 00:03:00.384 net/iavf: not in enabled drivers build config 00:03:00.384 net/ice: not in enabled drivers build config 00:03:00.384 net/idpf: not in enabled drivers build config 00:03:00.384 net/igc: not in enabled drivers build config 00:03:00.384 net/ionic: not in enabled drivers build config 00:03:00.384 net/ipn3ke: not in enabled drivers build config 00:03:00.384 net/ixgbe: not in enabled drivers build config 00:03:00.384 net/kni: not in enabled drivers build config 00:03:00.384 net/liquidio: not in enabled drivers build config 00:03:00.384 net/mana: not in enabled drivers build config 00:03:00.384 net/memif: not in enabled drivers build config 00:03:00.384 net/mlx4: not in enabled drivers build config 00:03:00.384 net/mlx5: not in enabled drivers build config 00:03:00.384 net/mvneta: not in enabled drivers build config 00:03:00.384 net/mvpp2: not in enabled drivers build config 00:03:00.384 net/netvsc: not in enabled drivers build config 00:03:00.384 net/nfb: not in enabled drivers build config 00:03:00.384 net/nfp: not in enabled drivers build config 00:03:00.384 net/ngbe: not in enabled drivers build config 00:03:00.384 net/null: not in enabled drivers build config 00:03:00.384 net/octeontx: not in enabled drivers build config 00:03:00.384 net/octeon_ep: not in enabled drivers build config 00:03:00.384 net/pcap: not in enabled drivers build config 00:03:00.384 net/pfe: not in enabled drivers build config 00:03:00.384 net/qede: not in enabled drivers build config 00:03:00.384 net/ring: not in enabled drivers build config 00:03:00.384 net/sfc: not in enabled drivers build config 
00:03:00.384 net/softnic: not in enabled drivers build config 00:03:00.384 net/tap: not in enabled drivers build config 00:03:00.384 net/thunderx: not in enabled drivers build config 00:03:00.384 net/txgbe: not in enabled drivers build config 00:03:00.384 net/vdev_netvsc: not in enabled drivers build config 00:03:00.384 net/vhost: not in enabled drivers build config 00:03:00.384 net/virtio: not in enabled drivers build config 00:03:00.384 net/vmxnet3: not in enabled drivers build config 00:03:00.384 raw/cnxk_bphy: not in enabled drivers build config 00:03:00.384 raw/cnxk_gpio: not in enabled drivers build config 00:03:00.384 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:00.384 raw/ifpga: not in enabled drivers build config 00:03:00.385 raw/ntb: not in enabled drivers build config 00:03:00.385 raw/skeleton: not in enabled drivers build config 00:03:00.385 crypto/armv8: not in enabled drivers build config 00:03:00.385 crypto/bcmfs: not in enabled drivers build config 00:03:00.385 crypto/caam_jr: not in enabled drivers build config 00:03:00.385 crypto/ccp: not in enabled drivers build config 00:03:00.385 crypto/cnxk: not in enabled drivers build config 00:03:00.385 crypto/dpaa_sec: not in enabled drivers build config 00:03:00.385 crypto/dpaa2_sec: not in enabled drivers build config 00:03:00.385 crypto/ipsec_mb: not in enabled drivers build config 00:03:00.385 crypto/mlx5: not in enabled drivers build config 00:03:00.385 crypto/mvsam: not in enabled drivers build config 00:03:00.385 crypto/nitrox: not in enabled drivers build config 00:03:00.385 crypto/null: not in enabled drivers build config 00:03:00.385 crypto/octeontx: not in enabled drivers build config 00:03:00.385 crypto/openssl: not in enabled drivers build config 00:03:00.385 crypto/scheduler: not in enabled drivers build config 00:03:00.385 crypto/uadk: not in enabled drivers build config 00:03:00.385 crypto/virtio: not in enabled drivers build config 00:03:00.385 compress/isal: not in enabled drivers build config 00:03:00.385 compress/mlx5: not in enabled drivers build config 00:03:00.385 compress/octeontx: not in enabled drivers build config 00:03:00.385 compress/zlib: not in enabled drivers build config 00:03:00.385 regex/mlx5: not in enabled drivers build config 00:03:00.385 regex/cn9k: not in enabled drivers build config 00:03:00.385 vdpa/ifc: not in enabled drivers build config 00:03:00.385 vdpa/mlx5: not in enabled drivers build config 00:03:00.385 vdpa/sfc: not in enabled drivers build config 00:03:00.385 event/cnxk: not in enabled drivers build config 00:03:00.385 event/dlb2: not in enabled drivers build config 00:03:00.385 event/dpaa: not in enabled drivers build config 00:03:00.385 event/dpaa2: not in enabled drivers build config 00:03:00.385 event/dsw: not in enabled drivers build config 00:03:00.385 event/opdl: not in enabled drivers build config 00:03:00.385 event/skeleton: not in enabled drivers build config 00:03:00.385 event/sw: not in enabled drivers build config 00:03:00.385 event/octeontx: not in enabled drivers build config 00:03:00.385 baseband/acc: not in enabled drivers build config 00:03:00.385 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:00.385 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:00.385 baseband/la12xx: not in enabled drivers build config 00:03:00.385 baseband/null: not in enabled drivers build config 00:03:00.385 baseband/turbo_sw: not in enabled drivers build config 00:03:00.385 gpu/cuda: not in enabled drivers build config 00:03:00.385 
00:03:00.385 00:03:00.385 Build targets in project: 314 00:03:00.385 00:03:00.385 DPDK 22.11.4 00:03:00.385 00:03:00.385 User defined options 00:03:00.385 libdir : lib 00:03:00.385 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:00.385 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:00.385 c_link_args : 00:03:00.385 enable_docs : false 00:03:00.385 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:00.385 enable_kmods : false 00:03:00.385 machine : native 00:03:00.385 tests : false 00:03:00.385 00:03:00.385 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:00.385 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:00.643 16:08:26 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:00.643 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:00.643 [1/743] Generating lib/rte_telemetry_mingw with a custom command 00:03:00.643 [2/743] Generating lib/rte_kvargs_def with a custom command 00:03:00.643 [3/743] Generating lib/rte_telemetry_def with a custom command 00:03:00.643 [4/743] Generating lib/rte_kvargs_mingw with a custom command 00:03:00.643 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:00.643 [6/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:00.643 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:00.643 [8/743] Linking static target lib/librte_kvargs.a 00:03:00.643 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.902 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:00.902 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:00.902 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:00.902 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:00.902 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:00.902 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:00.902 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:00.902 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:00.902 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:00.902 [19/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.160 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:01.160 [21/743] Linking target lib/librte_kvargs.so.23.0 00:03:01.160 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:01.160 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:01.160 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:01.160 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:01.160 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:01.160 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:01.160 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 
00:03:01.160 [29/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:01.160 [30/743] Linking static target lib/librte_telemetry.a 00:03:01.418 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:01.419 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:01.419 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:01.419 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:01.419 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:01.419 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:01.419 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:01.419 [38/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:01.419 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:01.419 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:01.419 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:01.677 [42/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.677 [43/743] Linking target lib/librte_telemetry.so.23.0 00:03:01.677 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:01.677 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:01.677 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:01.677 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:01.677 [48/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:01.677 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:01.935 [50/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:01.935 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:01.935 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:01.935 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:01.935 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:01.935 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:01.935 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:01.935 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:01.935 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:01.935 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:01.935 [60/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:01.935 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:01.935 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:01.935 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:01.935 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:02.193 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:02.193 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:02.193 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:02.193 [68/743] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:02.193 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:02.193 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:02.193 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:02.193 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:02.193 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:02.193 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:02.193 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:02.193 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:02.193 [77/743] Generating lib/rte_eal_def with a custom command 00:03:02.452 [78/743] Generating lib/rte_eal_mingw with a custom command 00:03:02.452 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:02.452 [80/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:02.452 [81/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:02.452 [82/743] Generating lib/rte_ring_def with a custom command 00:03:02.452 [83/743] Generating lib/rte_ring_mingw with a custom command 00:03:02.452 [84/743] Generating lib/rte_rcu_def with a custom command 00:03:02.452 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:03:02.452 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:02.452 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:02.452 [88/743] Linking static target lib/librte_ring.a 00:03:02.452 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:02.452 [90/743] Generating lib/rte_mempool_def with a custom command 00:03:02.452 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:03:02.710 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:02.710 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:02.710 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.968 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:02.968 [96/743] Linking static target lib/librte_eal.a 00:03:02.968 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:02.968 [98/743] Generating lib/rte_mbuf_def with a custom command 00:03:02.968 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:02.968 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:03:02.968 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:03.226 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:03.226 [103/743] Linking static target lib/librte_rcu.a 00:03:03.226 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:03.226 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:03.484 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:03.484 [107/743] Linking static target lib/librte_mempool.a 00:03:03.484 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.484 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:03.746 [110/743] Generating lib/rte_net_def with a custom command 00:03:03.746 [111/743] Compiling C object 
lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:03.746 [112/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:03.746 [113/743] Generating lib/rte_net_mingw with a custom command 00:03:03.746 [114/743] Generating lib/rte_meter_def with a custom command 00:03:03.746 [115/743] Generating lib/rte_meter_mingw with a custom command 00:03:03.746 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:03.746 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:03.746 [118/743] Linking static target lib/librte_meter.a 00:03:03.746 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:04.008 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:04.008 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:04.008 [122/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.008 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:04.008 [124/743] Linking static target lib/librte_mbuf.a 00:03:04.266 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:04.266 [126/743] Linking static target lib/librte_net.a 00:03:04.266 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.523 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.523 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:04.523 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:04.523 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:04.523 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:04.781 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.781 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:05.039 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:05.343 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:05.343 [137/743] Generating lib/rte_ethdev_def with a custom command 00:03:05.343 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:05.343 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:03:05.343 [140/743] Generating lib/rte_pci_def with a custom command 00:03:05.343 [141/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:05.343 [142/743] Generating lib/rte_pci_mingw with a custom command 00:03:05.343 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:05.343 [144/743] Linking static target lib/librte_pci.a 00:03:05.601 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:05.601 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:05.601 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:05.602 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:05.602 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:05.602 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:05.602 [151/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.602 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:05.602 [153/743] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:05.860 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:05.860 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:05.860 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:05.860 [157/743] Generating lib/rte_cmdline_def with a custom command 00:03:05.860 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:03:05.860 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:05.860 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:05.860 [161/743] Generating lib/rte_metrics_def with a custom command 00:03:05.860 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:03:05.860 [163/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:05.860 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:06.118 [165/743] Generating lib/rte_hash_def with a custom command 00:03:06.118 [166/743] Generating lib/rte_hash_mingw with a custom command 00:03:06.118 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:06.118 [168/743] Generating lib/rte_timer_def with a custom command 00:03:06.118 [169/743] Generating lib/rte_timer_mingw with a custom command 00:03:06.118 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:06.118 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:06.118 [172/743] Linking static target lib/librte_cmdline.a 00:03:06.118 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:06.376 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:06.376 [175/743] Linking static target lib/librte_metrics.a 00:03:06.634 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:06.634 [177/743] Linking static target lib/librte_timer.a 00:03:06.893 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.893 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.152 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:07.152 [181/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:07.152 [182/743] Linking static target lib/librte_ethdev.a 00:03:07.152 [183/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:07.152 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.718 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:07.718 [186/743] Generating lib/rte_acl_def with a custom command 00:03:07.718 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:07.718 [188/743] Generating lib/rte_acl_mingw with a custom command 00:03:07.718 [189/743] Generating lib/rte_bbdev_def with a custom command 00:03:07.718 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:03:07.718 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:07.718 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:03:07.976 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:03:08.235 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:08.493 [195/743] Compiling C object 
lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:08.493 [196/743] Linking static target lib/librte_bitratestats.a 00:03:08.493 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:08.752 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.752 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:08.752 [200/743] Linking static target lib/librte_bbdev.a 00:03:08.752 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:09.010 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:09.010 [203/743] Linking static target lib/librte_hash.a 00:03:09.269 [204/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.269 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:09.269 [206/743] Linking static target lib/acl/libavx512_tmp.a 00:03:09.269 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:09.269 [208/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:09.269 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:09.836 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.836 [211/743] Generating lib/rte_bpf_def with a custom command 00:03:09.836 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:03:09.836 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:09.836 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:03:09.836 [215/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:09.836 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:03:10.094 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:10.094 [218/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:10.094 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:10.094 [220/743] Linking static target lib/librte_cfgfile.a 00:03:10.094 [221/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:10.094 [222/743] Linking static target lib/librte_acl.a 00:03:10.094 [223/743] Generating lib/rte_compressdev_def with a custom command 00:03:10.094 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:03:10.353 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.353 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.611 [227/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:10.611 [228/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:10.611 [229/743] Generating lib/rte_cryptodev_def with a custom command 00:03:10.611 [230/743] Generating lib/rte_cryptodev_mingw with a custom command 00:03:10.611 [231/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.611 [232/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:10.611 [233/743] Linking target lib/librte_eal.so.23.0 00:03:10.611 [234/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:10.611 [235/743] Linking static target lib/librte_bpf.a 00:03:10.869 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:10.869 [237/743] Linking static target lib/librte_compressdev.a 00:03:10.869 [238/743] 
Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:10.869 [239/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:10.869 [240/743] Linking target lib/librte_ring.so.23.0 00:03:10.869 [241/743] Linking target lib/librte_meter.so.23.0 00:03:11.127 [242/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:11.127 [243/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:11.127 [244/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.127 [245/743] Linking target lib/librte_rcu.so.23.0 00:03:11.127 [246/743] Linking target lib/librte_mempool.so.23.0 00:03:11.127 [247/743] Linking target lib/librte_pci.so.23.0 00:03:11.127 [248/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:11.127 [249/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:11.127 [250/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:11.127 [251/743] Linking target lib/librte_timer.so.23.0 00:03:11.385 [252/743] Linking target lib/librte_mbuf.so.23.0 00:03:11.385 [253/743] Linking target lib/librte_acl.so.23.0 00:03:11.385 [254/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:11.385 [255/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:11.385 [256/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:11.385 [257/743] Linking target lib/librte_cfgfile.so.23.0 00:03:11.385 [258/743] Generating lib/rte_distributor_def with a custom command 00:03:11.385 [259/743] Generating lib/rte_distributor_mingw with a custom command 00:03:11.385 [260/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:11.385 [261/743] Generating lib/rte_efd_def with a custom command 00:03:11.385 [262/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:11.385 [263/743] Generating lib/rte_efd_mingw with a custom command 00:03:11.385 [264/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:11.385 [265/743] Linking target lib/librte_net.so.23.0 00:03:11.385 [266/743] Linking target lib/librte_bbdev.so.23.0 00:03:11.644 [267/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:11.644 [268/743] Linking static target lib/librte_distributor.a 00:03:11.644 [269/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:11.644 [270/743] Linking target lib/librte_cmdline.so.23.0 00:03:11.644 [271/743] Linking target lib/librte_hash.so.23.0 00:03:11.644 [272/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.644 [273/743] Linking target lib/librte_compressdev.so.23.0 00:03:11.902 [274/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:11.902 [275/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.902 [276/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.902 [277/743] Linking target lib/librte_distributor.so.23.0 00:03:11.902 [278/743] Linking target lib/librte_ethdev.so.23.0 00:03:11.902 [279/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:12.160 
[280/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:12.160 [281/743] Linking target lib/librte_metrics.so.23.0 00:03:12.160 [282/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:12.160 [283/743] Linking target lib/librte_bpf.so.23.0 00:03:12.160 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:12.160 [285/743] Linking target lib/librte_bitratestats.so.23.0 00:03:12.419 [286/743] Generating lib/rte_eventdev_def with a custom command 00:03:12.419 [287/743] Generating lib/rte_eventdev_mingw with a custom command 00:03:12.419 [288/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:12.419 [289/743] Generating lib/rte_gpudev_def with a custom command 00:03:12.419 [290/743] Generating lib/rte_gpudev_mingw with a custom command 00:03:12.419 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:12.986 [292/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:12.986 [293/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:12.986 [294/743] Linking static target lib/librte_efd.a 00:03:12.986 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:12.986 [296/743] Linking static target lib/librte_cryptodev.a 00:03:13.244 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.244 [298/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:13.244 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:13.244 [300/743] Linking target lib/librte_efd.so.23.0 00:03:13.244 [301/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:13.244 [302/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:13.244 [303/743] Generating lib/rte_gro_def with a custom command 00:03:13.244 [304/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:13.244 [305/743] Generating lib/rte_gro_mingw with a custom command 00:03:13.244 [306/743] Linking static target lib/librte_gpudev.a 00:03:13.503 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:13.768 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:14.025 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:14.025 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:14.025 [311/743] Generating lib/rte_gso_def with a custom command 00:03:14.025 [312/743] Generating lib/rte_gso_mingw with a custom command 00:03:14.025 [313/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.025 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:14.025 [315/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:14.283 [316/743] Linking target lib/librte_gpudev.so.23.0 00:03:14.283 [317/743] Linking static target lib/librte_gro.a 00:03:14.283 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:14.283 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:14.283 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.283 [321/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:14.541 [322/743] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:14.541 [323/743] Linking static target lib/librte_eventdev.a 00:03:14.541 [324/743] Linking target lib/librte_gro.so.23.0 00:03:14.541 [325/743] Generating lib/rte_ip_frag_def with a custom command 00:03:14.541 [326/743] Generating lib/rte_ip_frag_mingw with a custom command 00:03:14.541 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:14.541 [328/743] Linking static target lib/librte_gso.a 00:03:14.541 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:14.541 [330/743] Linking static target lib/librte_jobstats.a 00:03:14.799 [331/743] Generating lib/rte_jobstats_def with a custom command 00:03:14.799 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:03:14.799 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.799 [334/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:14.799 [335/743] Linking target lib/librte_gso.so.23.0 00:03:14.799 [336/743] Generating lib/rte_latencystats_def with a custom command 00:03:14.799 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:03:14.799 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:15.057 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:15.057 [340/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.057 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:15.057 [342/743] Generating lib/rte_lpm_def with a custom command 00:03:15.057 [343/743] Linking target lib/librte_jobstats.so.23.0 00:03:15.057 [344/743] Generating lib/rte_lpm_mingw with a custom command 00:03:15.057 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:15.315 [346/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.315 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:03:15.315 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:15.315 [349/743] Linking static target lib/librte_ip_frag.a 00:03:15.315 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:15.572 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.572 [352/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:15.572 [353/743] Linking static target lib/librte_latencystats.a 00:03:15.830 [354/743] Linking target lib/librte_ip_frag.so.23.0 00:03:15.830 [355/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:15.830 [356/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:15.830 [357/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:15.830 [358/743] Generating lib/rte_member_def with a custom command 00:03:15.830 [359/743] Generating lib/rte_member_mingw with a custom command 00:03:15.830 [360/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:15.830 [361/743] Generating lib/rte_pcapng_def with a custom command 00:03:15.830 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:03:15.830 [363/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.830 
[364/743] Linking target lib/librte_latencystats.so.23.0 00:03:16.088 [365/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:16.088 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:16.088 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:16.088 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:16.088 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:16.088 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:16.346 [371/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:16.346 [372/743] Linking static target lib/librte_lpm.a 00:03:16.346 [373/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:16.346 [374/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.604 [375/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:16.604 [376/743] Linking target lib/librte_eventdev.so.23.0 00:03:16.604 [377/743] Generating lib/rte_power_def with a custom command 00:03:16.604 [378/743] Generating lib/rte_power_mingw with a custom command 00:03:16.604 [379/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:16.604 [380/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:16.604 [381/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.604 [382/743] Generating lib/rte_rawdev_def with a custom command 00:03:16.604 [383/743] Generating lib/rte_rawdev_mingw with a custom command 00:03:16.862 [384/743] Linking target lib/librte_lpm.so.23.0 00:03:16.862 [385/743] Generating lib/rte_regexdev_def with a custom command 00:03:16.862 [386/743] Generating lib/rte_regexdev_mingw with a custom command 00:03:16.862 [387/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:16.862 [388/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:16.862 [389/743] Linking static target lib/librte_pcapng.a 00:03:16.862 [390/743] Generating lib/rte_dmadev_def with a custom command 00:03:16.862 [391/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:16.862 [392/743] Generating lib/rte_dmadev_mingw with a custom command 00:03:16.862 [393/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:16.862 [394/743] Generating lib/rte_rib_def with a custom command 00:03:16.862 [395/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:16.862 [396/743] Generating lib/rte_rib_mingw with a custom command 00:03:16.862 [397/743] Linking static target lib/librte_rawdev.a 00:03:17.121 [398/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:17.121 [399/743] Generating lib/rte_reorder_def with a custom command 00:03:17.121 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:03:17.121 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.121 [402/743] Linking target lib/librte_pcapng.so.23.0 00:03:17.121 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:17.121 [404/743] Linking static target lib/librte_dmadev.a 00:03:17.121 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:17.121 [406/743] Linking static target lib/librte_power.a 00:03:17.380 [407/743] Generating symbol 
file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:17.380 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.380 [409/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:17.380 [410/743] Linking target lib/librte_rawdev.so.23.0 00:03:17.380 [411/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:17.380 [412/743] Linking static target lib/librte_member.a 00:03:17.380 [413/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:17.380 [414/743] Linking static target lib/librte_regexdev.a 00:03:17.639 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:17.639 [416/743] Generating lib/rte_sched_def with a custom command 00:03:17.639 [417/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:17.639 [418/743] Generating lib/rte_sched_mingw with a custom command 00:03:17.639 [419/743] Generating lib/rte_security_def with a custom command 00:03:17.639 [420/743] Generating lib/rte_security_mingw with a custom command 00:03:17.639 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:17.639 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.897 [423/743] Linking target lib/librte_dmadev.so.23.0 00:03:17.897 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:17.897 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:17.897 [426/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:17.897 [427/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.897 [428/743] Linking static target lib/librte_reorder.a 00:03:17.897 [429/743] Generating lib/rte_stack_def with a custom command 00:03:17.897 [430/743] Linking target lib/librte_member.so.23.0 00:03:17.897 [431/743] Generating lib/rte_stack_mingw with a custom command 00:03:17.897 [432/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:17.897 [433/743] Linking static target lib/librte_stack.a 00:03:17.897 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:18.155 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:18.155 [436/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.155 [437/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:18.155 [438/743] Linking static target lib/librte_rib.a 00:03:18.155 [439/743] Linking target lib/librte_reorder.so.23.0 00:03:18.155 [440/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.155 [441/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.155 [442/743] Linking target lib/librte_stack.so.23.0 00:03:18.155 [443/743] Linking target lib/librte_power.so.23.0 00:03:18.155 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.413 [445/743] Linking target lib/librte_regexdev.so.23.0 00:03:18.676 [446/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.676 [447/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:18.676 [448/743] Linking static target lib/librte_security.a 00:03:18.676 [449/743] Linking target lib/librte_rib.so.23.0 00:03:18.676 [450/743] Compiling 
C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:18.676 [451/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:18.676 [452/743] Generating lib/rte_vhost_def with a custom command 00:03:18.676 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:03:18.975 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:18.975 [455/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:18.975 [456/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.975 [457/743] Linking target lib/librte_security.so.23.0 00:03:19.236 [458/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:19.236 [459/743] Linking static target lib/librte_sched.a 00:03:19.236 [460/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:19.494 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.494 [462/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:19.752 [463/743] Linking target lib/librte_sched.so.23.0 00:03:19.752 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:19.752 [465/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:19.752 [466/743] Generating lib/rte_ipsec_def with a custom command 00:03:19.752 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:03:19.752 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:19.752 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:20.010 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:20.010 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:20.269 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:20.269 [473/743] Generating lib/rte_fib_def with a custom command 00:03:20.269 [474/743] Generating lib/rte_fib_mingw with a custom command 00:03:20.269 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:20.269 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:20.527 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:20.527 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:20.527 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:20.527 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:20.785 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:20.785 [482/743] Linking static target lib/librte_ipsec.a 00:03:21.044 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.044 [484/743] Linking target lib/librte_ipsec.so.23.0 00:03:21.303 [485/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:21.303 [486/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:21.303 [487/743] Linking static target lib/librte_fib.a 00:03:21.303 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:21.303 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:21.303 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:21.562 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:21.562 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.562 [493/743] Linking target 
lib/librte_fib.so.23.0 00:03:21.821 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:22.387 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:22.387 [496/743] Generating lib/rte_port_def with a custom command 00:03:22.387 [497/743] Generating lib/rte_port_mingw with a custom command 00:03:22.387 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:22.387 [499/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:22.387 [500/743] Generating lib/rte_pdump_def with a custom command 00:03:22.387 [501/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:22.387 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:03:22.387 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:22.645 [504/743] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:22.645 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:22.645 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:22.645 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:22.904 [508/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:22.904 [509/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:22.904 [510/743] Linking static target lib/librte_port.a 00:03:23.162 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:23.162 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:23.422 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.422 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:23.422 [515/743] Linking target lib/librte_port.so.23.0 00:03:23.422 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:23.687 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:23.687 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:23.687 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:23.687 [520/743] Linking static target lib/librte_pdump.a 00:03:23.944 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.944 [522/743] Linking target lib/librte_pdump.so.23.0 00:03:23.944 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:23.944 [524/743] Generating lib/rte_table_def with a custom command 00:03:24.202 [525/743] Generating lib/rte_table_mingw with a custom command 00:03:24.202 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:24.202 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:24.460 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:24.460 [529/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:24.460 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:24.719 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:24.719 [532/743] Generating lib/rte_pipeline_def with a custom command 00:03:24.719 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:24.719 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 
00:03:24.719 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:24.719 [536/743] Linking static target lib/librte_table.a 00:03:24.977 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:25.236 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:25.495 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:25.495 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.495 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:25.495 [542/743] Linking target lib/librte_table.so.23.0 00:03:25.495 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:25.754 [544/743] Generating lib/rte_graph_def with a custom command 00:03:25.754 [545/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:25.754 [546/743] Generating lib/rte_graph_mingw with a custom command 00:03:25.754 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:25.754 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:26.321 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:26.321 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:26.321 [551/743] Linking static target lib/librte_graph.a 00:03:26.321 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:26.580 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:26.580 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:26.580 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:26.838 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:26.838 [557/743] Generating lib/rte_node_def with a custom command 00:03:27.098 [558/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:27.098 [559/743] Generating lib/rte_node_mingw with a custom command 00:03:27.098 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:27.098 [561/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:27.098 [562/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.098 [563/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:27.098 [564/743] Linking target lib/librte_graph.so.23.0 00:03:27.358 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:27.358 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:27.358 [567/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:27.358 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:27.358 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:27.358 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:27.358 [571/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:27.358 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:27.358 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:27.358 [574/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:27.617 [575/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:27.617 [576/743] Generating drivers/rte_mempool_ring_mingw 
with a custom command 00:03:27.618 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:27.618 [578/743] Linking static target lib/librte_node.a 00:03:27.618 [579/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:27.618 [580/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:27.618 [581/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:27.877 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.877 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:27.877 [584/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:27.877 [585/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:27.877 [586/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:27.877 [587/743] Linking static target drivers/librte_bus_vdev.a 00:03:27.877 [588/743] Linking target lib/librte_node.so.23.0 00:03:27.877 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:28.137 [590/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:28.137 [591/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:28.137 [592/743] Linking static target drivers/librte_bus_pci.a 00:03:28.137 [593/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.137 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:28.137 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:28.396 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:28.396 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.396 [598/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:28.656 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:28.656 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:28.656 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:28.656 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:28.656 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:28.656 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:28.915 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:28.915 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:28.915 [607/743] Linking static target drivers/librte_mempool_ring.a 00:03:28.915 [608/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:28.915 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:28.915 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:29.483 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:29.742 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:29.742 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:29.742 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 
00:03:30.308 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:30.567 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:30.567 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:30.827 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:31.095 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:31.095 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:31.358 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:31.358 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:31.358 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:31.358 [624/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:31.358 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:32.294 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:32.552 [627/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:32.552 [628/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:32.811 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:32.811 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:32.811 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:32.811 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:32.811 [633/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:32.811 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:33.379 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:33.379 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:33.638 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:33.896 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:33.896 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:33.896 [640/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:33.896 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:33.896 [642/743] Linking static target lib/librte_vhost.a 00:03:34.154 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:34.154 [644/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:34.154 [645/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:34.154 [646/743] Linking static target drivers/librte_net_i40e.a 00:03:34.154 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:34.154 [648/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:34.413 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:34.671 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:34.671 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:34.671 [652/743] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:34.934 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.934 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:34.934 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:34.934 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:35.217 [657/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.217 [658/743] Linking target lib/librte_vhost.so.23.0 00:03:35.217 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:35.815 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:35.815 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:35.815 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:35.815 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:35.815 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:35.815 [665/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:36.073 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:36.073 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:36.073 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:36.073 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:36.331 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:36.898 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:36.898 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:36.898 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:37.157 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:37.415 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:37.415 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:37.674 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:37.674 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:37.934 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:37.934 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:38.193 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:38.193 [682/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:38.193 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:38.452 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:38.452 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:38.711 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:38.711 [687/743] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:38.711 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:38.971 [689/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:38.971 [690/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:38.971 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:38.971 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:38.971 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:39.229 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:39.488 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:39.747 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:39.747 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:40.006 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:40.006 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:40.265 [700/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:40.265 [701/743] Linking static target lib/librte_pipeline.a 00:03:40.525 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:40.525 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:40.525 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:40.785 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:41.044 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:41.044 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:41.044 [708/743] Linking target app/dpdk-dumpcap 00:03:41.044 [709/743] Linking target app/dpdk-pdump 00:03:41.044 [710/743] Linking target app/dpdk-proc-info 00:03:41.044 [711/743] Linking target app/dpdk-test-acl 00:03:41.303 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:41.303 [713/743] Linking target app/dpdk-test-bbdev 00:03:41.303 [714/743] Linking target app/dpdk-test-cmdline 00:03:41.303 [715/743] Linking target app/dpdk-test-compress-perf 00:03:41.562 [716/743] Linking target app/dpdk-test-crypto-perf 00:03:41.562 [717/743] Linking target app/dpdk-test-eventdev 00:03:41.562 [718/743] Linking target app/dpdk-test-fib 00:03:41.822 [719/743] Linking target app/dpdk-test-gpudev 00:03:41.822 [720/743] Linking target app/dpdk-test-flow-perf 00:03:41.822 [721/743] Linking target app/dpdk-test-pipeline 00:03:42.390 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:42.390 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:42.390 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:42.390 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:42.649 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:42.649 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:42.908 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.908 [729/743] Linking target lib/librte_pipeline.so.23.0 00:03:42.908 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:43.167 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:43.427 [732/743] Compiling C 
object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:43.427 [733/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:43.427 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:43.427 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:43.694 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:43.957 [737/743] Linking target app/dpdk-test-sad 00:03:43.957 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:43.957 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:43.957 [740/743] Linking target app/dpdk-test-regex 00:03:44.216 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:44.476 [742/743] Linking target app/dpdk-testpmd 00:03:44.735 [743/743] Linking target app/dpdk-test-security-perf 00:03:44.735 16:09:10 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:44.735 16:09:10 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:44.735 16:09:10 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:44.735 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:44.735 [0/1] Installing files. 00:03:44.997 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.997 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.997 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:44.998 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.998 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:44.999 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.999 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.000 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.001 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.001 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.002 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.002 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 
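[Editor's note] The lines above stage the pipeline sample application's sources and its .cli/.spec example scripts under the install prefix's share/dpdk/examples tree; the example itself is installed as source only, not as a binary. A minimal sketch of how such a staged example is typically built and exercised after this install completes (the example Makefiles locate DPDK through pkg-config; the run-time flags shown in the comment are illustrative and should be checked against the app's --help):

# Minimal sketch, assuming the install prefix used throughout this log.
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
make -C /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
# Illustrative run against one of the staged CLI scripts (exact flags may
# differ between DPDK releases):
#   ./build/pipeline -l 0-1 -- -s examples/l2fwd.cli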
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.262 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:45.263 Installing 
/home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:45.263 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:45.263 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.263 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_meter.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing 
lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.526 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing 
lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:45.527 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:45.527 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:45.527 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.527 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:45.527 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.527 
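[Editor's note] The headers listed above land in the staged include directory (build/include). Once the libdpdk.pc files are installed later in this log, applications built against this staged DPDK normally discover the include and library paths through pkg-config rather than hard-coding them. A minimal sketch, assuming the prefix shown here:

export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk   # expected to report the 22.11.x release built in this job
pkg-config --cflags libdpdk       # should resolve to .../spdk_repo/dpdk/build/include
pkg-config --libs libdpdk         # should resolve to .../spdk_repo/dpdk/build/lib and the librte_* libraries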
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.528 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing 
/home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.529 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 
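[Editor's note] A quick way to confirm that the staged headers above are usable is a compile-only check that pulls a couple of them in through the pkg-config flags. A hedged sketch, assuming the same prefix; the temporary file name is arbitrary and the header pair is just an example from the files installed above:

export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
printf '#include <rte_fib.h>\n#include <rte_vhost.h>\nint main(void){return 0;}\n' > /tmp/dpdk_hdr_check.c
cc -c /tmp/dpdk_hdr_check.c -o /tmp/dpdk_hdr_check.o $(pkg-config --cflags libdpdk)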
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:45.530 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:45.530 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:45.530 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:45.530 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:45.530 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:45.530 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:45.530 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:45.530 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:45.530 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:45.530 Installing symlink pointing to librte_rcu.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:45.530 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:45.530 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:45.530 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:45.530 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:45.530 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:45.530 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:45.530 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:45.530 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:45.530 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:45.530 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:45.530 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:45.530 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:45.530 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:45.530 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:45.530 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:45.530 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:45.530 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:45.530 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:45.530 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:45.530 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:45.530 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:45.530 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:45.530 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:45.530 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:45.531 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:45.531 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:45.531 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:45.531 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:45.531 Installing 
symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:45.531 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:45.531 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:45.531 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:45.531 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:45.531 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:45.531 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:45.531 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:45.531 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:45.531 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:45.531 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:45.531 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:45.531 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:45.531 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:45.531 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:45.531 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:45.531 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:45.531 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:45.531 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:45.531 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:45.531 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:45.531 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:45.531 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:45.531 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:45.531 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:45.531 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:45.531 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:45.531 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:45.531 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:45.531 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:45.531 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:45.531 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:45.531 Installing symlink pointing to 
librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:45.531 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:45.531 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:45.531 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:45.531 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:45.531 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:45.531 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:45.531 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:45.531 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:45.531 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:45.531 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:45.531 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:45.531 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:45.531 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:45.531 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:45.531 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:45.531 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:45.531 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:45.531 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:45.531 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:45.531 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:45.531 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:45.531 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:45.531 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:45.531 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:45.531 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:45.531 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:45.531 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:45.531 Installing symlink pointing to librte_stack.so.23 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:45.531 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:45.531 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:45.531 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:45.531 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:45.531 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:45.531 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:45.531 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:45.531 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:45.531 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:45.531 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:45.531 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:45.531 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:45.531 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:45.531 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:45.531 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:45.532 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:45.532 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:45.532 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:45.532 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:45.532 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:45.532 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:45.532 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:45.532 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:45.532 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:45.532 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:45.532 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:45.532 Running custom install script '/bin/sh 
/home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:45.792 16:09:11 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:45.792 16:09:11 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:45.792 00:03:45.792 real 0m52.171s 00:03:45.792 user 6m11.165s 00:03:45.792 sys 0m56.228s 00:03:45.792 ************************************ 00:03:45.792 END TEST build_native_dpdk 00:03:45.792 ************************************ 00:03:45.792 16:09:11 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:45.792 16:09:11 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:45.792 16:09:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:45.792 16:09:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:45.792 16:09:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:45.792 16:09:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:45.792 16:09:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:45.792 16:09:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:45.792 16:09:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:45.792 16:09:11 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:45.792 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:46.051 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:46.051 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:46.051 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:46.310 Using 'verbs' RDMA provider 00:03:59.893 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:14.896 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:14.896 Creating mk/config.mk...done. 00:04:14.896 Creating mk/cc.flags.mk...done. 00:04:14.896 Type 'make' to build. 00:04:14.896 16:09:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:14.896 16:09:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:14.896 16:09:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:14.896 16:09:38 -- common/autotest_common.sh@10 -- $ set +x 00:04:14.896 ************************************ 00:04:14.896 START TEST make 00:04:14.896 ************************************ 00:04:14.896 16:09:38 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:14.896 make[1]: Nothing to be done for 'all'. 
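For reference, the externally built DPDK above is consumed by SPDK purely through the staged prefix shown in this log: headers land in /home/vagrant/spdk_repo/dpdk/build/include, libraries and libdpdk.pc in /home/vagrant/spdk_repo/dpdk/build/lib, and SPDK's configure is pointed at that prefix. A minimal sketch of the same sequence, with the flag list trimmed to the DPDK-relevant options taken from this log (illustrative only, not the full autobuild invocation):

    # build SPDK against the staged DPDK install (paths as captured in this log)
    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-shared --enable-debug --enable-werror \
                --with-dpdk=/home/vagrant/spdk_repo/dpdk/build
    make -j10

    # a standalone consumer could resolve the same staged install through the
    # libdpdk.pc installed above (hypothetical use outside this pipeline):
    PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
        pkg-config --cflags --libs libdpdk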
00:05:11.140 CC lib/ut_mock/mock.o 00:05:11.140 CC lib/ut/ut.o 00:05:11.140 CC lib/log/log.o 00:05:11.140 CC lib/log/log_flags.o 00:05:11.140 CC lib/log/log_deprecated.o 00:05:11.140 LIB libspdk_ut_mock.a 00:05:11.140 LIB libspdk_ut.a 00:05:11.140 LIB libspdk_log.a 00:05:11.140 SO libspdk_ut_mock.so.6.0 00:05:11.140 SO libspdk_ut.so.2.0 00:05:11.140 SO libspdk_log.so.7.1 00:05:11.140 SYMLINK libspdk_ut_mock.so 00:05:11.140 SYMLINK libspdk_ut.so 00:05:11.140 SYMLINK libspdk_log.so 00:05:11.140 CC lib/ioat/ioat.o 00:05:11.140 CC lib/util/bit_array.o 00:05:11.140 CC lib/util/cpuset.o 00:05:11.140 CC lib/util/base64.o 00:05:11.140 CC lib/util/crc32.o 00:05:11.140 CC lib/util/crc32c.o 00:05:11.140 CC lib/util/crc16.o 00:05:11.140 CXX lib/trace_parser/trace.o 00:05:11.140 CC lib/dma/dma.o 00:05:11.140 CC lib/vfio_user/host/vfio_user_pci.o 00:05:11.140 CC lib/vfio_user/host/vfio_user.o 00:05:11.140 CC lib/util/crc32_ieee.o 00:05:11.140 CC lib/util/crc64.o 00:05:11.140 CC lib/util/dif.o 00:05:11.140 CC lib/util/fd.o 00:05:11.140 LIB libspdk_dma.a 00:05:11.140 CC lib/util/fd_group.o 00:05:11.140 CC lib/util/file.o 00:05:11.140 SO libspdk_dma.so.5.0 00:05:11.140 LIB libspdk_ioat.a 00:05:11.140 SO libspdk_ioat.so.7.0 00:05:11.140 CC lib/util/hexlify.o 00:05:11.140 SYMLINK libspdk_dma.so 00:05:11.140 CC lib/util/iov.o 00:05:11.140 LIB libspdk_vfio_user.a 00:05:11.140 CC lib/util/math.o 00:05:11.140 SYMLINK libspdk_ioat.so 00:05:11.140 CC lib/util/net.o 00:05:11.140 SO libspdk_vfio_user.so.5.0 00:05:11.140 CC lib/util/pipe.o 00:05:11.140 CC lib/util/strerror_tls.o 00:05:11.140 SYMLINK libspdk_vfio_user.so 00:05:11.140 CC lib/util/string.o 00:05:11.140 CC lib/util/uuid.o 00:05:11.140 CC lib/util/xor.o 00:05:11.140 CC lib/util/zipf.o 00:05:11.140 CC lib/util/md5.o 00:05:11.140 LIB libspdk_util.a 00:05:11.140 SO libspdk_util.so.10.1 00:05:11.140 LIB libspdk_trace_parser.a 00:05:11.140 SYMLINK libspdk_util.so 00:05:11.140 SO libspdk_trace_parser.so.6.0 00:05:11.140 SYMLINK libspdk_trace_parser.so 00:05:11.140 CC lib/idxd/idxd.o 00:05:11.140 CC lib/idxd/idxd_user.o 00:05:11.140 CC lib/idxd/idxd_kernel.o 00:05:11.140 CC lib/json/json_parse.o 00:05:11.140 CC lib/conf/conf.o 00:05:11.140 CC lib/json/json_util.o 00:05:11.140 CC lib/vmd/vmd.o 00:05:11.140 CC lib/json/json_write.o 00:05:11.140 CC lib/rdma_utils/rdma_utils.o 00:05:11.140 CC lib/env_dpdk/env.o 00:05:11.140 CC lib/vmd/led.o 00:05:11.140 LIB libspdk_conf.a 00:05:11.140 CC lib/env_dpdk/memory.o 00:05:11.140 CC lib/env_dpdk/pci.o 00:05:11.140 CC lib/env_dpdk/init.o 00:05:11.140 SO libspdk_conf.so.6.0 00:05:11.140 LIB libspdk_rdma_utils.a 00:05:11.140 LIB libspdk_json.a 00:05:11.140 SO libspdk_rdma_utils.so.1.0 00:05:11.140 SYMLINK libspdk_conf.so 00:05:11.140 CC lib/env_dpdk/threads.o 00:05:11.140 CC lib/env_dpdk/pci_ioat.o 00:05:11.140 SO libspdk_json.so.6.0 00:05:11.140 SYMLINK libspdk_rdma_utils.so 00:05:11.140 SYMLINK libspdk_json.so 00:05:11.140 CC lib/env_dpdk/pci_virtio.o 00:05:11.140 CC lib/env_dpdk/pci_vmd.o 00:05:11.140 CC lib/env_dpdk/pci_idxd.o 00:05:11.140 CC lib/rdma_provider/common.o 00:05:11.140 LIB libspdk_idxd.a 00:05:11.140 CC lib/env_dpdk/pci_event.o 00:05:11.140 CC lib/env_dpdk/sigbus_handler.o 00:05:11.140 SO libspdk_idxd.so.12.1 00:05:11.140 CC lib/env_dpdk/pci_dpdk.o 00:05:11.140 CC lib/jsonrpc/jsonrpc_server.o 00:05:11.140 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:11.140 LIB libspdk_vmd.a 00:05:11.140 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:11.140 SYMLINK libspdk_idxd.so 00:05:11.140 CC lib/jsonrpc/jsonrpc_client.o 00:05:11.140 
SO libspdk_vmd.so.6.0 00:05:11.140 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:11.140 SYMLINK libspdk_vmd.so 00:05:11.140 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:11.140 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:11.140 LIB libspdk_rdma_provider.a 00:05:11.140 LIB libspdk_jsonrpc.a 00:05:11.140 SO libspdk_rdma_provider.so.7.0 00:05:11.140 SO libspdk_jsonrpc.so.6.0 00:05:11.140 SYMLINK libspdk_rdma_provider.so 00:05:11.140 SYMLINK libspdk_jsonrpc.so 00:05:11.140 CC lib/rpc/rpc.o 00:05:11.140 LIB libspdk_env_dpdk.a 00:05:11.140 SO libspdk_env_dpdk.so.15.1 00:05:11.140 LIB libspdk_rpc.a 00:05:11.140 SO libspdk_rpc.so.6.0 00:05:11.140 SYMLINK libspdk_rpc.so 00:05:11.140 SYMLINK libspdk_env_dpdk.so 00:05:11.140 CC lib/trace/trace.o 00:05:11.140 CC lib/trace/trace_flags.o 00:05:11.140 CC lib/trace/trace_rpc.o 00:05:11.140 CC lib/notify/notify.o 00:05:11.140 CC lib/notify/notify_rpc.o 00:05:11.140 CC lib/keyring/keyring.o 00:05:11.140 CC lib/keyring/keyring_rpc.o 00:05:11.140 LIB libspdk_notify.a 00:05:11.140 SO libspdk_notify.so.6.0 00:05:11.140 LIB libspdk_trace.a 00:05:11.140 LIB libspdk_keyring.a 00:05:11.140 SYMLINK libspdk_notify.so 00:05:11.140 SO libspdk_trace.so.11.0 00:05:11.140 SO libspdk_keyring.so.2.0 00:05:11.140 SYMLINK libspdk_trace.so 00:05:11.140 SYMLINK libspdk_keyring.so 00:05:11.140 CC lib/thread/iobuf.o 00:05:11.140 CC lib/thread/thread.o 00:05:11.140 CC lib/sock/sock_rpc.o 00:05:11.140 CC lib/sock/sock.o 00:05:11.140 LIB libspdk_sock.a 00:05:11.140 SO libspdk_sock.so.10.0 00:05:11.140 SYMLINK libspdk_sock.so 00:05:11.399 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:11.399 CC lib/nvme/nvme_fabric.o 00:05:11.399 CC lib/nvme/nvme_ctrlr.o 00:05:11.399 CC lib/nvme/nvme_ns_cmd.o 00:05:11.399 CC lib/nvme/nvme_ns.o 00:05:11.399 CC lib/nvme/nvme_pcie.o 00:05:11.399 CC lib/nvme/nvme_pcie_common.o 00:05:11.399 CC lib/nvme/nvme.o 00:05:11.399 CC lib/nvme/nvme_qpair.o 00:05:12.334 LIB libspdk_thread.a 00:05:12.334 SO libspdk_thread.so.11.0 00:05:12.334 CC lib/nvme/nvme_quirks.o 00:05:12.334 SYMLINK libspdk_thread.so 00:05:12.334 CC lib/nvme/nvme_transport.o 00:05:12.334 CC lib/nvme/nvme_discovery.o 00:05:12.334 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:12.334 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:12.334 CC lib/nvme/nvme_tcp.o 00:05:12.592 CC lib/nvme/nvme_opal.o 00:05:12.592 CC lib/nvme/nvme_io_msg.o 00:05:12.592 CC lib/nvme/nvme_poll_group.o 00:05:12.850 CC lib/nvme/nvme_zns.o 00:05:13.107 CC lib/nvme/nvme_stubs.o 00:05:13.107 CC lib/nvme/nvme_auth.o 00:05:13.107 CC lib/nvme/nvme_cuse.o 00:05:13.107 CC lib/nvme/nvme_rdma.o 00:05:13.365 CC lib/accel/accel.o 00:05:13.365 CC lib/blob/blobstore.o 00:05:13.365 CC lib/blob/request.o 00:05:13.624 CC lib/blob/zeroes.o 00:05:13.624 CC lib/accel/accel_rpc.o 00:05:13.624 CC lib/blob/blob_bs_dev.o 00:05:13.881 CC lib/accel/accel_sw.o 00:05:13.881 CC lib/init/json_config.o 00:05:13.881 CC lib/init/subsystem.o 00:05:14.139 CC lib/init/subsystem_rpc.o 00:05:14.139 CC lib/init/rpc.o 00:05:14.139 CC lib/virtio/virtio.o 00:05:14.139 CC lib/virtio/virtio_vhost_user.o 00:05:14.139 CC lib/virtio/virtio_vfio_user.o 00:05:14.139 CC lib/virtio/virtio_pci.o 00:05:14.397 CC lib/fsdev/fsdev.o 00:05:14.397 CC lib/fsdev/fsdev_io.o 00:05:14.397 LIB libspdk_init.a 00:05:14.397 SO libspdk_init.so.6.0 00:05:14.397 SYMLINK libspdk_init.so 00:05:14.397 CC lib/fsdev/fsdev_rpc.o 00:05:14.397 LIB libspdk_accel.a 00:05:14.397 LIB libspdk_nvme.a 00:05:14.655 SO libspdk_accel.so.16.0 00:05:14.655 LIB libspdk_virtio.a 00:05:14.655 SO libspdk_virtio.so.7.0 00:05:14.655 SYMLINK 
libspdk_accel.so 00:05:14.655 CC lib/event/app.o 00:05:14.655 CC lib/event/reactor.o 00:05:14.655 CC lib/event/log_rpc.o 00:05:14.655 CC lib/event/app_rpc.o 00:05:14.655 SYMLINK libspdk_virtio.so 00:05:14.655 CC lib/event/scheduler_static.o 00:05:14.655 SO libspdk_nvme.so.15.0 00:05:14.912 CC lib/bdev/bdev.o 00:05:14.912 CC lib/bdev/bdev_rpc.o 00:05:14.912 CC lib/bdev/bdev_zone.o 00:05:14.912 CC lib/bdev/part.o 00:05:14.912 CC lib/bdev/scsi_nvme.o 00:05:14.913 SYMLINK libspdk_nvme.so 00:05:14.913 LIB libspdk_fsdev.a 00:05:14.913 SO libspdk_fsdev.so.2.0 00:05:15.170 LIB libspdk_event.a 00:05:15.170 SYMLINK libspdk_fsdev.so 00:05:15.170 SO libspdk_event.so.14.0 00:05:15.170 SYMLINK libspdk_event.so 00:05:15.429 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:15.995 LIB libspdk_fuse_dispatcher.a 00:05:15.995 SO libspdk_fuse_dispatcher.so.1.0 00:05:15.995 SYMLINK libspdk_fuse_dispatcher.so 00:05:16.562 LIB libspdk_blob.a 00:05:16.820 SO libspdk_blob.so.12.0 00:05:16.820 SYMLINK libspdk_blob.so 00:05:17.078 CC lib/lvol/lvol.o 00:05:17.078 CC lib/blobfs/tree.o 00:05:17.078 CC lib/blobfs/blobfs.o 00:05:17.646 LIB libspdk_bdev.a 00:05:17.646 SO libspdk_bdev.so.17.0 00:05:17.903 SYMLINK libspdk_bdev.so 00:05:17.903 LIB libspdk_blobfs.a 00:05:17.903 SO libspdk_blobfs.so.11.0 00:05:17.903 CC lib/nvmf/ctrlr.o 00:05:17.903 CC lib/nvmf/ctrlr_discovery.o 00:05:17.903 CC lib/nvmf/ctrlr_bdev.o 00:05:17.903 LIB libspdk_lvol.a 00:05:17.903 CC lib/nbd/nbd.o 00:05:17.903 CC lib/ftl/ftl_core.o 00:05:17.903 CC lib/nvmf/subsystem.o 00:05:17.903 CC lib/ublk/ublk.o 00:05:17.903 CC lib/scsi/dev.o 00:05:18.161 SO libspdk_lvol.so.11.0 00:05:18.161 SYMLINK libspdk_blobfs.so 00:05:18.161 CC lib/ftl/ftl_init.o 00:05:18.161 SYMLINK libspdk_lvol.so 00:05:18.161 CC lib/nvmf/nvmf.o 00:05:18.419 CC lib/ftl/ftl_layout.o 00:05:18.419 CC lib/scsi/lun.o 00:05:18.420 CC lib/ublk/ublk_rpc.o 00:05:18.420 CC lib/nbd/nbd_rpc.o 00:05:18.678 CC lib/nvmf/nvmf_rpc.o 00:05:18.678 CC lib/scsi/port.o 00:05:18.678 CC lib/ftl/ftl_debug.o 00:05:18.678 LIB libspdk_nbd.a 00:05:18.678 CC lib/nvmf/transport.o 00:05:18.678 LIB libspdk_ublk.a 00:05:18.678 SO libspdk_nbd.so.7.0 00:05:18.678 SO libspdk_ublk.so.3.0 00:05:18.679 SYMLINK libspdk_nbd.so 00:05:18.679 CC lib/nvmf/tcp.o 00:05:18.679 SYMLINK libspdk_ublk.so 00:05:18.679 CC lib/nvmf/stubs.o 00:05:18.679 CC lib/nvmf/mdns_server.o 00:05:18.679 CC lib/scsi/scsi.o 00:05:18.937 CC lib/ftl/ftl_io.o 00:05:18.937 CC lib/scsi/scsi_bdev.o 00:05:19.196 CC lib/nvmf/rdma.o 00:05:19.196 CC lib/ftl/ftl_sb.o 00:05:19.196 CC lib/nvmf/auth.o 00:05:19.196 CC lib/ftl/ftl_l2p.o 00:05:19.196 CC lib/scsi/scsi_pr.o 00:05:19.454 CC lib/ftl/ftl_l2p_flat.o 00:05:19.454 CC lib/scsi/scsi_rpc.o 00:05:19.454 CC lib/scsi/task.o 00:05:19.454 CC lib/ftl/ftl_nv_cache.o 00:05:19.454 CC lib/ftl/ftl_band.o 00:05:19.454 CC lib/ftl/ftl_band_ops.o 00:05:19.714 CC lib/ftl/ftl_writer.o 00:05:19.714 CC lib/ftl/ftl_rq.o 00:05:19.714 LIB libspdk_scsi.a 00:05:19.714 SO libspdk_scsi.so.9.0 00:05:19.714 CC lib/ftl/ftl_reloc.o 00:05:19.714 SYMLINK libspdk_scsi.so 00:05:19.714 CC lib/ftl/ftl_l2p_cache.o 00:05:19.974 CC lib/ftl/ftl_p2l.o 00:05:19.974 CC lib/ftl/ftl_p2l_log.o 00:05:19.974 CC lib/ftl/mngt/ftl_mngt.o 00:05:19.974 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:20.232 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:20.232 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:20.232 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:20.232 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:20.232 CC lib/iscsi/conn.o 00:05:20.232 CC lib/iscsi/init_grp.o 00:05:20.490 CC lib/iscsi/iscsi.o 
00:05:20.490 CC lib/vhost/vhost.o 00:05:20.490 CC lib/vhost/vhost_rpc.o 00:05:20.490 CC lib/vhost/vhost_scsi.o 00:05:20.490 CC lib/vhost/vhost_blk.o 00:05:20.490 CC lib/vhost/rte_vhost_user.o 00:05:20.490 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:20.748 CC lib/iscsi/param.o 00:05:21.007 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:21.007 CC lib/iscsi/portal_grp.o 00:05:21.007 CC lib/iscsi/tgt_node.o 00:05:21.007 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:21.265 CC lib/iscsi/iscsi_subsystem.o 00:05:21.265 CC lib/iscsi/iscsi_rpc.o 00:05:21.265 LIB libspdk_nvmf.a 00:05:21.265 SO libspdk_nvmf.so.20.0 00:05:21.523 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:21.523 CC lib/iscsi/task.o 00:05:21.523 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:21.523 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:21.523 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:21.523 SYMLINK libspdk_nvmf.so 00:05:21.523 CC lib/ftl/utils/ftl_conf.o 00:05:21.523 CC lib/ftl/utils/ftl_md.o 00:05:21.523 CC lib/ftl/utils/ftl_mempool.o 00:05:21.781 CC lib/ftl/utils/ftl_bitmap.o 00:05:21.781 CC lib/ftl/utils/ftl_property.o 00:05:21.781 LIB libspdk_vhost.a 00:05:21.781 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:21.781 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:21.781 LIB libspdk_iscsi.a 00:05:21.781 SO libspdk_vhost.so.8.0 00:05:21.781 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:21.781 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:21.781 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:21.781 SO libspdk_iscsi.so.8.0 00:05:22.038 SYMLINK libspdk_vhost.so 00:05:22.038 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:22.038 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:22.038 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:22.038 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:22.038 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:22.038 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:22.038 SYMLINK libspdk_iscsi.so 00:05:22.038 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:22.038 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:22.038 CC lib/ftl/base/ftl_base_dev.o 00:05:22.038 CC lib/ftl/base/ftl_base_bdev.o 00:05:22.038 CC lib/ftl/ftl_trace.o 00:05:22.603 LIB libspdk_ftl.a 00:05:22.603 SO libspdk_ftl.so.9.0 00:05:22.862 SYMLINK libspdk_ftl.so 00:05:23.196 CC module/env_dpdk/env_dpdk_rpc.o 00:05:23.454 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:23.454 CC module/sock/uring/uring.o 00:05:23.454 CC module/scheduler/gscheduler/gscheduler.o 00:05:23.454 CC module/blob/bdev/blob_bdev.o 00:05:23.454 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:23.454 CC module/accel/error/accel_error.o 00:05:23.454 CC module/sock/posix/posix.o 00:05:23.454 CC module/keyring/file/keyring.o 00:05:23.454 CC module/fsdev/aio/fsdev_aio.o 00:05:23.454 LIB libspdk_env_dpdk_rpc.a 00:05:23.454 SO libspdk_env_dpdk_rpc.so.6.0 00:05:23.454 LIB libspdk_scheduler_gscheduler.a 00:05:23.454 SYMLINK libspdk_env_dpdk_rpc.so 00:05:23.454 CC module/keyring/file/keyring_rpc.o 00:05:23.454 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:23.454 LIB libspdk_scheduler_dpdk_governor.a 00:05:23.454 SO libspdk_scheduler_gscheduler.so.4.0 00:05:23.454 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:23.712 LIB libspdk_scheduler_dynamic.a 00:05:23.712 CC module/accel/error/accel_error_rpc.o 00:05:23.712 SO libspdk_scheduler_dynamic.so.4.0 00:05:23.712 SYMLINK libspdk_scheduler_gscheduler.so 00:05:23.712 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:23.712 CC module/fsdev/aio/linux_aio_mgr.o 00:05:23.712 LIB libspdk_blob_bdev.a 00:05:23.712 SYMLINK libspdk_scheduler_dynamic.so 00:05:23.712 SO libspdk_blob_bdev.so.12.0 00:05:23.712 LIB libspdk_keyring_file.a 
00:05:23.712 SO libspdk_keyring_file.so.2.0 00:05:23.712 LIB libspdk_accel_error.a 00:05:23.712 SYMLINK libspdk_blob_bdev.so 00:05:23.712 SO libspdk_accel_error.so.2.0 00:05:23.712 SYMLINK libspdk_keyring_file.so 00:05:23.712 CC module/accel/ioat/accel_ioat.o 00:05:23.712 CC module/accel/ioat/accel_ioat_rpc.o 00:05:23.971 SYMLINK libspdk_accel_error.so 00:05:23.971 CC module/accel/dsa/accel_dsa.o 00:05:23.971 CC module/keyring/linux/keyring.o 00:05:23.971 CC module/keyring/linux/keyring_rpc.o 00:05:23.971 CC module/bdev/delay/vbdev_delay.o 00:05:23.971 CC module/accel/iaa/accel_iaa.o 00:05:23.971 LIB libspdk_accel_ioat.a 00:05:23.971 CC module/accel/dsa/accel_dsa_rpc.o 00:05:23.971 LIB libspdk_fsdev_aio.a 00:05:23.971 SO libspdk_accel_ioat.so.6.0 00:05:23.971 CC module/blobfs/bdev/blobfs_bdev.o 00:05:23.971 LIB libspdk_sock_uring.a 00:05:24.229 SO libspdk_fsdev_aio.so.1.0 00:05:24.229 SO libspdk_sock_uring.so.5.0 00:05:24.230 LIB libspdk_sock_posix.a 00:05:24.230 LIB libspdk_keyring_linux.a 00:05:24.230 SYMLINK libspdk_accel_ioat.so 00:05:24.230 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:24.230 SO libspdk_keyring_linux.so.1.0 00:05:24.230 SO libspdk_sock_posix.so.6.0 00:05:24.230 SYMLINK libspdk_sock_uring.so 00:05:24.230 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:24.230 CC module/accel/iaa/accel_iaa_rpc.o 00:05:24.230 SYMLINK libspdk_fsdev_aio.so 00:05:24.230 LIB libspdk_accel_dsa.a 00:05:24.230 SYMLINK libspdk_keyring_linux.so 00:05:24.230 SO libspdk_accel_dsa.so.5.0 00:05:24.230 SYMLINK libspdk_sock_posix.so 00:05:24.230 SYMLINK libspdk_accel_dsa.so 00:05:24.230 LIB libspdk_blobfs_bdev.a 00:05:24.230 LIB libspdk_accel_iaa.a 00:05:24.488 CC module/bdev/error/vbdev_error.o 00:05:24.488 SO libspdk_blobfs_bdev.so.6.0 00:05:24.488 SO libspdk_accel_iaa.so.3.0 00:05:24.488 CC module/bdev/error/vbdev_error_rpc.o 00:05:24.488 CC module/bdev/gpt/gpt.o 00:05:24.488 SYMLINK libspdk_blobfs_bdev.so 00:05:24.488 CC module/bdev/lvol/vbdev_lvol.o 00:05:24.488 LIB libspdk_bdev_delay.a 00:05:24.488 SYMLINK libspdk_accel_iaa.so 00:05:24.488 CC module/bdev/null/bdev_null.o 00:05:24.488 CC module/bdev/malloc/bdev_malloc.o 00:05:24.488 CC module/bdev/nvme/bdev_nvme.o 00:05:24.488 SO libspdk_bdev_delay.so.6.0 00:05:24.488 SYMLINK libspdk_bdev_delay.so 00:05:24.488 CC module/bdev/null/bdev_null_rpc.o 00:05:24.488 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:24.488 CC module/bdev/passthru/vbdev_passthru.o 00:05:24.746 CC module/bdev/raid/bdev_raid.o 00:05:24.746 CC module/bdev/gpt/vbdev_gpt.o 00:05:24.746 LIB libspdk_bdev_error.a 00:05:24.746 SO libspdk_bdev_error.so.6.0 00:05:24.746 CC module/bdev/raid/bdev_raid_rpc.o 00:05:24.746 SYMLINK libspdk_bdev_error.so 00:05:24.746 LIB libspdk_bdev_null.a 00:05:24.746 CC module/bdev/raid/bdev_raid_sb.o 00:05:24.746 SO libspdk_bdev_null.so.6.0 00:05:25.005 SYMLINK libspdk_bdev_null.so 00:05:25.005 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:25.005 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:25.005 LIB libspdk_bdev_gpt.a 00:05:25.005 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:25.005 SO libspdk_bdev_gpt.so.6.0 00:05:25.005 CC module/bdev/nvme/nvme_rpc.o 00:05:25.005 CC module/bdev/nvme/bdev_mdns_client.o 00:05:25.005 SYMLINK libspdk_bdev_gpt.so 00:05:25.264 LIB libspdk_bdev_malloc.a 00:05:25.264 LIB libspdk_bdev_passthru.a 00:05:25.264 SO libspdk_bdev_malloc.so.6.0 00:05:25.264 SO libspdk_bdev_passthru.so.6.0 00:05:25.264 CC module/bdev/split/vbdev_split.o 00:05:25.264 CC module/bdev/nvme/vbdev_opal.o 00:05:25.264 SYMLINK libspdk_bdev_malloc.so 00:05:25.264 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:05:25.264 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:25.264 SYMLINK libspdk_bdev_passthru.so 00:05:25.264 LIB libspdk_bdev_lvol.a 00:05:25.264 CC module/bdev/split/vbdev_split_rpc.o 00:05:25.264 SO libspdk_bdev_lvol.so.6.0 00:05:25.264 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:25.264 SYMLINK libspdk_bdev_lvol.so 00:05:25.522 CC module/bdev/uring/bdev_uring.o 00:05:25.522 CC module/bdev/uring/bdev_uring_rpc.o 00:05:25.522 CC module/bdev/raid/raid0.o 00:05:25.522 LIB libspdk_bdev_split.a 00:05:25.522 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:25.522 SO libspdk_bdev_split.so.6.0 00:05:25.522 CC module/bdev/aio/bdev_aio.o 00:05:25.522 SYMLINK libspdk_bdev_split.so 00:05:25.781 CC module/bdev/ftl/bdev_ftl.o 00:05:25.781 CC module/bdev/aio/bdev_aio_rpc.o 00:05:25.781 CC module/bdev/raid/raid1.o 00:05:25.781 LIB libspdk_bdev_zone_block.a 00:05:25.781 SO libspdk_bdev_zone_block.so.6.0 00:05:25.781 CC module/bdev/raid/concat.o 00:05:25.781 SYMLINK libspdk_bdev_zone_block.so 00:05:25.781 CC module/bdev/iscsi/bdev_iscsi.o 00:05:25.781 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:25.781 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:25.781 LIB libspdk_bdev_uring.a 00:05:25.781 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:25.781 SO libspdk_bdev_uring.so.6.0 00:05:26.039 LIB libspdk_bdev_aio.a 00:05:26.039 SYMLINK libspdk_bdev_uring.so 00:05:26.039 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:26.039 SO libspdk_bdev_aio.so.6.0 00:05:26.039 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:26.039 LIB libspdk_bdev_raid.a 00:05:26.040 SYMLINK libspdk_bdev_aio.so 00:05:26.040 SO libspdk_bdev_raid.so.6.0 00:05:26.040 SYMLINK libspdk_bdev_raid.so 00:05:26.298 LIB libspdk_bdev_iscsi.a 00:05:26.298 LIB libspdk_bdev_ftl.a 00:05:26.298 SO libspdk_bdev_iscsi.so.6.0 00:05:26.298 SO libspdk_bdev_ftl.so.6.0 00:05:26.298 SYMLINK libspdk_bdev_iscsi.so 00:05:26.298 SYMLINK libspdk_bdev_ftl.so 00:05:26.298 LIB libspdk_bdev_virtio.a 00:05:26.298 SO libspdk_bdev_virtio.so.6.0 00:05:26.558 SYMLINK libspdk_bdev_virtio.so 00:05:27.126 LIB libspdk_bdev_nvme.a 00:05:27.126 SO libspdk_bdev_nvme.so.7.1 00:05:27.384 SYMLINK libspdk_bdev_nvme.so 00:05:27.643 CC module/event/subsystems/keyring/keyring.o 00:05:27.643 CC module/event/subsystems/fsdev/fsdev.o 00:05:27.643 CC module/event/subsystems/vmd/vmd.o 00:05:27.643 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:27.643 CC module/event/subsystems/sock/sock.o 00:05:27.643 CC module/event/subsystems/iobuf/iobuf.o 00:05:27.902 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:27.902 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:27.902 CC module/event/subsystems/scheduler/scheduler.o 00:05:27.902 LIB libspdk_event_fsdev.a 00:05:27.902 LIB libspdk_event_keyring.a 00:05:27.902 LIB libspdk_event_vhost_blk.a 00:05:27.902 LIB libspdk_event_sock.a 00:05:27.902 LIB libspdk_event_iobuf.a 00:05:27.902 LIB libspdk_event_vmd.a 00:05:27.902 SO libspdk_event_fsdev.so.1.0 00:05:27.902 SO libspdk_event_vhost_blk.so.3.0 00:05:27.902 SO libspdk_event_sock.so.5.0 00:05:27.902 SO libspdk_event_keyring.so.1.0 00:05:27.902 LIB libspdk_event_scheduler.a 00:05:27.902 SO libspdk_event_iobuf.so.3.0 00:05:27.902 SO libspdk_event_vmd.so.6.0 00:05:27.902 SO libspdk_event_scheduler.so.4.0 00:05:27.902 SYMLINK libspdk_event_fsdev.so 00:05:27.902 SYMLINK libspdk_event_vhost_blk.so 00:05:27.902 SYMLINK libspdk_event_sock.so 00:05:27.902 SYMLINK libspdk_event_keyring.so 00:05:27.902 SYMLINK libspdk_event_iobuf.so 00:05:28.160 SYMLINK libspdk_event_vmd.so 
00:05:28.160 SYMLINK libspdk_event_scheduler.so 00:05:28.160 CC module/event/subsystems/accel/accel.o 00:05:28.418 LIB libspdk_event_accel.a 00:05:28.676 SO libspdk_event_accel.so.6.0 00:05:28.676 SYMLINK libspdk_event_accel.so 00:05:28.935 CC module/event/subsystems/bdev/bdev.o 00:05:29.193 LIB libspdk_event_bdev.a 00:05:29.193 SO libspdk_event_bdev.so.6.0 00:05:29.193 SYMLINK libspdk_event_bdev.so 00:05:29.451 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:29.451 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:29.451 CC module/event/subsystems/nbd/nbd.o 00:05:29.451 CC module/event/subsystems/scsi/scsi.o 00:05:29.451 CC module/event/subsystems/ublk/ublk.o 00:05:29.709 LIB libspdk_event_nbd.a 00:05:29.709 LIB libspdk_event_ublk.a 00:05:29.709 LIB libspdk_event_scsi.a 00:05:29.709 SO libspdk_event_nbd.so.6.0 00:05:29.709 SO libspdk_event_ublk.so.3.0 00:05:29.709 SO libspdk_event_scsi.so.6.0 00:05:29.709 SYMLINK libspdk_event_scsi.so 00:05:29.709 SYMLINK libspdk_event_nbd.so 00:05:29.710 SYMLINK libspdk_event_ublk.so 00:05:29.710 LIB libspdk_event_nvmf.a 00:05:29.710 SO libspdk_event_nvmf.so.6.0 00:05:29.968 SYMLINK libspdk_event_nvmf.so 00:05:29.968 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:29.968 CC module/event/subsystems/iscsi/iscsi.o 00:05:30.226 LIB libspdk_event_vhost_scsi.a 00:05:30.226 SO libspdk_event_vhost_scsi.so.3.0 00:05:30.226 LIB libspdk_event_iscsi.a 00:05:30.226 SO libspdk_event_iscsi.so.6.0 00:05:30.226 SYMLINK libspdk_event_vhost_scsi.so 00:05:30.226 SYMLINK libspdk_event_iscsi.so 00:05:30.485 SO libspdk.so.6.0 00:05:30.485 SYMLINK libspdk.so 00:05:30.743 TEST_HEADER include/spdk/accel.h 00:05:30.743 CC app/trace_record/trace_record.o 00:05:30.743 CXX app/trace/trace.o 00:05:30.743 TEST_HEADER include/spdk/accel_module.h 00:05:30.743 TEST_HEADER include/spdk/assert.h 00:05:30.743 TEST_HEADER include/spdk/barrier.h 00:05:30.743 TEST_HEADER include/spdk/base64.h 00:05:30.743 TEST_HEADER include/spdk/bdev.h 00:05:30.743 TEST_HEADER include/spdk/bdev_module.h 00:05:30.743 TEST_HEADER include/spdk/bdev_zone.h 00:05:30.743 TEST_HEADER include/spdk/bit_array.h 00:05:30.743 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:30.743 TEST_HEADER include/spdk/bit_pool.h 00:05:30.743 TEST_HEADER include/spdk/blob_bdev.h 00:05:30.743 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:30.743 TEST_HEADER include/spdk/blobfs.h 00:05:30.743 TEST_HEADER include/spdk/blob.h 00:05:30.743 TEST_HEADER include/spdk/conf.h 00:05:30.743 TEST_HEADER include/spdk/config.h 00:05:30.743 TEST_HEADER include/spdk/cpuset.h 00:05:30.743 TEST_HEADER include/spdk/crc16.h 00:05:30.743 TEST_HEADER include/spdk/crc32.h 00:05:30.743 TEST_HEADER include/spdk/crc64.h 00:05:30.743 TEST_HEADER include/spdk/dif.h 00:05:30.743 TEST_HEADER include/spdk/dma.h 00:05:30.743 TEST_HEADER include/spdk/endian.h 00:05:30.743 TEST_HEADER include/spdk/env_dpdk.h 00:05:30.743 TEST_HEADER include/spdk/env.h 00:05:30.743 TEST_HEADER include/spdk/event.h 00:05:30.743 TEST_HEADER include/spdk/fd_group.h 00:05:30.743 TEST_HEADER include/spdk/fd.h 00:05:30.743 TEST_HEADER include/spdk/file.h 00:05:30.743 TEST_HEADER include/spdk/fsdev.h 00:05:30.743 TEST_HEADER include/spdk/fsdev_module.h 00:05:30.743 TEST_HEADER include/spdk/ftl.h 00:05:30.743 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:30.743 TEST_HEADER include/spdk/gpt_spec.h 00:05:30.743 TEST_HEADER include/spdk/hexlify.h 00:05:30.743 TEST_HEADER include/spdk/histogram_data.h 00:05:30.743 TEST_HEADER include/spdk/idxd.h 00:05:30.743 TEST_HEADER 
include/spdk/idxd_spec.h 00:05:30.743 CC test/thread/poller_perf/poller_perf.o 00:05:30.743 TEST_HEADER include/spdk/init.h 00:05:30.743 TEST_HEADER include/spdk/ioat.h 00:05:30.743 TEST_HEADER include/spdk/ioat_spec.h 00:05:30.743 TEST_HEADER include/spdk/iscsi_spec.h 00:05:30.743 TEST_HEADER include/spdk/json.h 00:05:30.743 CC examples/util/zipf/zipf.o 00:05:30.743 TEST_HEADER include/spdk/jsonrpc.h 00:05:30.743 TEST_HEADER include/spdk/keyring.h 00:05:30.743 TEST_HEADER include/spdk/keyring_module.h 00:05:30.743 TEST_HEADER include/spdk/likely.h 00:05:30.743 CC examples/ioat/perf/perf.o 00:05:30.743 TEST_HEADER include/spdk/log.h 00:05:30.743 TEST_HEADER include/spdk/lvol.h 00:05:30.743 TEST_HEADER include/spdk/md5.h 00:05:30.743 TEST_HEADER include/spdk/memory.h 00:05:30.743 TEST_HEADER include/spdk/mmio.h 00:05:30.743 TEST_HEADER include/spdk/nbd.h 00:05:30.743 TEST_HEADER include/spdk/net.h 00:05:30.744 TEST_HEADER include/spdk/notify.h 00:05:30.744 TEST_HEADER include/spdk/nvme.h 00:05:30.744 TEST_HEADER include/spdk/nvme_intel.h 00:05:30.744 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:30.744 CC test/dma/test_dma/test_dma.o 00:05:30.744 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:30.744 TEST_HEADER include/spdk/nvme_spec.h 00:05:30.744 TEST_HEADER include/spdk/nvme_zns.h 00:05:31.002 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:31.002 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:31.002 TEST_HEADER include/spdk/nvmf.h 00:05:31.002 TEST_HEADER include/spdk/nvmf_spec.h 00:05:31.002 TEST_HEADER include/spdk/nvmf_transport.h 00:05:31.002 TEST_HEADER include/spdk/opal.h 00:05:31.002 TEST_HEADER include/spdk/opal_spec.h 00:05:31.002 TEST_HEADER include/spdk/pci_ids.h 00:05:31.002 TEST_HEADER include/spdk/pipe.h 00:05:31.002 TEST_HEADER include/spdk/queue.h 00:05:31.002 CC test/app/bdev_svc/bdev_svc.o 00:05:31.002 TEST_HEADER include/spdk/reduce.h 00:05:31.002 TEST_HEADER include/spdk/rpc.h 00:05:31.002 TEST_HEADER include/spdk/scheduler.h 00:05:31.002 TEST_HEADER include/spdk/scsi.h 00:05:31.002 TEST_HEADER include/spdk/scsi_spec.h 00:05:31.002 TEST_HEADER include/spdk/sock.h 00:05:31.002 TEST_HEADER include/spdk/stdinc.h 00:05:31.002 TEST_HEADER include/spdk/string.h 00:05:31.002 CC test/env/mem_callbacks/mem_callbacks.o 00:05:31.002 TEST_HEADER include/spdk/thread.h 00:05:31.002 TEST_HEADER include/spdk/trace.h 00:05:31.002 TEST_HEADER include/spdk/trace_parser.h 00:05:31.002 TEST_HEADER include/spdk/tree.h 00:05:31.002 TEST_HEADER include/spdk/ublk.h 00:05:31.002 TEST_HEADER include/spdk/util.h 00:05:31.002 TEST_HEADER include/spdk/uuid.h 00:05:31.002 TEST_HEADER include/spdk/version.h 00:05:31.002 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:31.002 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:31.002 TEST_HEADER include/spdk/vhost.h 00:05:31.002 TEST_HEADER include/spdk/vmd.h 00:05:31.002 TEST_HEADER include/spdk/xor.h 00:05:31.002 TEST_HEADER include/spdk/zipf.h 00:05:31.002 CXX test/cpp_headers/accel.o 00:05:31.002 LINK interrupt_tgt 00:05:31.002 LINK poller_perf 00:05:31.002 LINK spdk_trace_record 00:05:31.002 LINK ioat_perf 00:05:31.002 LINK zipf 00:05:31.260 LINK mem_callbacks 00:05:31.260 LINK bdev_svc 00:05:31.260 CXX test/cpp_headers/accel_module.o 00:05:31.260 LINK spdk_trace 00:05:31.260 CXX test/cpp_headers/assert.o 00:05:31.260 CXX test/cpp_headers/barrier.o 00:05:31.260 CXX test/cpp_headers/base64.o 00:05:31.260 CC examples/ioat/verify/verify.o 00:05:31.260 CC test/env/vtophys/vtophys.o 00:05:31.518 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:31.518 
CXX test/cpp_headers/bdev.o 00:05:31.518 CC test/env/memory/memory_ut.o 00:05:31.518 CC test/env/pci/pci_ut.o 00:05:31.518 LINK test_dma 00:05:31.518 CC app/nvmf_tgt/nvmf_main.o 00:05:31.518 LINK vtophys 00:05:31.518 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:31.518 LINK env_dpdk_post_init 00:05:31.518 LINK verify 00:05:31.518 CXX test/cpp_headers/bdev_module.o 00:05:31.776 CC examples/thread/thread/thread_ex.o 00:05:31.776 LINK nvmf_tgt 00:05:31.776 CXX test/cpp_headers/bdev_zone.o 00:05:31.776 CXX test/cpp_headers/bit_array.o 00:05:32.034 LINK pci_ut 00:05:32.034 CC app/iscsi_tgt/iscsi_tgt.o 00:05:32.034 CC test/app/histogram_perf/histogram_perf.o 00:05:32.034 LINK thread 00:05:32.034 CXX test/cpp_headers/bit_pool.o 00:05:32.034 CC examples/sock/hello_world/hello_sock.o 00:05:32.034 LINK nvme_fuzz 00:05:32.034 CC examples/vmd/lsvmd/lsvmd.o 00:05:32.034 CC examples/idxd/perf/perf.o 00:05:32.034 LINK histogram_perf 00:05:32.293 LINK iscsi_tgt 00:05:32.293 CXX test/cpp_headers/blob_bdev.o 00:05:32.293 LINK hello_sock 00:05:32.293 LINK lsvmd 00:05:32.293 LINK memory_ut 00:05:32.293 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:32.293 CC app/spdk_lspci/spdk_lspci.o 00:05:32.293 CC app/spdk_tgt/spdk_tgt.o 00:05:32.293 CXX test/cpp_headers/blobfs_bdev.o 00:05:32.293 CC app/spdk_nvme_perf/perf.o 00:05:32.551 LINK spdk_lspci 00:05:32.551 CC test/app/jsoncat/jsoncat.o 00:05:32.551 LINK idxd_perf 00:05:32.551 CC test/app/stub/stub.o 00:05:32.551 CC examples/vmd/led/led.o 00:05:32.551 LINK spdk_tgt 00:05:32.551 CXX test/cpp_headers/blobfs.o 00:05:32.551 LINK jsoncat 00:05:32.809 CC examples/accel/perf/accel_perf.o 00:05:32.809 LINK stub 00:05:32.809 LINK led 00:05:32.809 CC app/spdk_nvme_identify/identify.o 00:05:32.809 CXX test/cpp_headers/blob.o 00:05:32.809 CC examples/blob/hello_world/hello_blob.o 00:05:32.809 CC app/spdk_nvme_discover/discovery_aer.o 00:05:33.067 CC examples/nvme/hello_world/hello_world.o 00:05:33.067 CXX test/cpp_headers/conf.o 00:05:33.067 CC examples/nvme/reconnect/reconnect.o 00:05:33.067 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:33.067 LINK spdk_nvme_discover 00:05:33.067 LINK hello_blob 00:05:33.325 LINK accel_perf 00:05:33.325 CXX test/cpp_headers/config.o 00:05:33.325 LINK hello_world 00:05:33.325 CXX test/cpp_headers/cpuset.o 00:05:33.325 CXX test/cpp_headers/crc16.o 00:05:33.325 LINK reconnect 00:05:33.325 LINK spdk_nvme_perf 00:05:33.325 CXX test/cpp_headers/crc32.o 00:05:33.325 CXX test/cpp_headers/crc64.o 00:05:33.583 CC examples/blob/cli/blobcli.o 00:05:33.583 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:33.583 LINK nvme_manage 00:05:33.583 LINK spdk_nvme_identify 00:05:33.583 CXX test/cpp_headers/dif.o 00:05:33.583 CXX test/cpp_headers/dma.o 00:05:33.583 CC test/rpc_client/rpc_client_test.o 00:05:33.840 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:33.840 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:33.840 CXX test/cpp_headers/endian.o 00:05:33.840 CC examples/bdev/hello_world/hello_bdev.o 00:05:33.840 CXX test/cpp_headers/env_dpdk.o 00:05:33.840 CC examples/nvme/arbitration/arbitration.o 00:05:33.840 LINK rpc_client_test 00:05:33.840 CC app/spdk_top/spdk_top.o 00:05:34.097 CXX test/cpp_headers/env.o 00:05:34.097 LINK iscsi_fuzz 00:05:34.097 LINK blobcli 00:05:34.097 LINK hello_fsdev 00:05:34.097 LINK hello_bdev 00:05:34.097 CC examples/nvme/hotplug/hotplug.o 00:05:34.097 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:34.097 LINK vhost_fuzz 00:05:34.097 CXX test/cpp_headers/event.o 00:05:34.355 LINK arbitration 00:05:34.355 CC 
examples/nvme/abort/abort.o 00:05:34.355 LINK hotplug 00:05:34.355 LINK cmb_copy 00:05:34.355 CXX test/cpp_headers/fd_group.o 00:05:34.613 CC examples/bdev/bdevperf/bdevperf.o 00:05:34.613 CC test/blobfs/mkfs/mkfs.o 00:05:34.613 CC test/accel/dif/dif.o 00:05:34.613 CC test/event/event_perf/event_perf.o 00:05:34.613 CC test/event/reactor/reactor.o 00:05:34.613 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:34.613 CXX test/cpp_headers/fd.o 00:05:34.871 LINK event_perf 00:05:34.871 LINK reactor 00:05:34.871 LINK mkfs 00:05:34.871 CC app/vhost/vhost.o 00:05:34.871 LINK abort 00:05:34.871 CXX test/cpp_headers/file.o 00:05:34.871 LINK pmr_persistence 00:05:34.871 LINK spdk_top 00:05:34.871 CC test/event/reactor_perf/reactor_perf.o 00:05:35.130 LINK vhost 00:05:35.130 CC test/event/app_repeat/app_repeat.o 00:05:35.130 CXX test/cpp_headers/fsdev.o 00:05:35.130 CC test/event/scheduler/scheduler.o 00:05:35.130 LINK reactor_perf 00:05:35.130 LINK app_repeat 00:05:35.130 CC app/spdk_dd/spdk_dd.o 00:05:35.130 LINK dif 00:05:35.130 CXX test/cpp_headers/fsdev_module.o 00:05:35.388 CC test/lvol/esnap/esnap.o 00:05:35.388 CC test/nvme/aer/aer.o 00:05:35.388 CC test/nvme/reset/reset.o 00:05:35.388 LINK scheduler 00:05:35.388 CXX test/cpp_headers/ftl.o 00:05:35.388 LINK bdevperf 00:05:35.388 CC test/nvme/sgl/sgl.o 00:05:35.388 CC test/nvme/e2edp/nvme_dp.o 00:05:35.646 CC test/nvme/overhead/overhead.o 00:05:35.646 LINK aer 00:05:35.646 LINK reset 00:05:35.646 CXX test/cpp_headers/fuse_dispatcher.o 00:05:35.646 CC test/nvme/err_injection/err_injection.o 00:05:35.646 LINK spdk_dd 00:05:35.646 LINK sgl 00:05:35.646 LINK nvme_dp 00:05:35.904 CXX test/cpp_headers/gpt_spec.o 00:05:35.904 CC test/nvme/startup/startup.o 00:05:35.904 LINK overhead 00:05:35.904 CC examples/nvmf/nvmf/nvmf.o 00:05:35.904 LINK err_injection 00:05:35.904 CC app/fio/nvme/fio_plugin.o 00:05:35.904 CC test/nvme/reserve/reserve.o 00:05:35.904 CC test/nvme/simple_copy/simple_copy.o 00:05:35.904 CXX test/cpp_headers/hexlify.o 00:05:35.904 CC test/nvme/connect_stress/connect_stress.o 00:05:36.163 LINK startup 00:05:36.163 CC test/nvme/boot_partition/boot_partition.o 00:05:36.163 CC test/nvme/compliance/nvme_compliance.o 00:05:36.163 LINK nvmf 00:05:36.163 CXX test/cpp_headers/histogram_data.o 00:05:36.163 LINK reserve 00:05:36.163 LINK connect_stress 00:05:36.163 LINK simple_copy 00:05:36.424 LINK boot_partition 00:05:36.424 CC test/nvme/fused_ordering/fused_ordering.o 00:05:36.424 CXX test/cpp_headers/idxd.o 00:05:36.424 CXX test/cpp_headers/idxd_spec.o 00:05:36.424 LINK nvme_compliance 00:05:36.424 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:36.424 CC test/nvme/fdp/fdp.o 00:05:36.424 LINK fused_ordering 00:05:36.424 CC test/nvme/cuse/cuse.o 00:05:36.704 LINK spdk_nvme 00:05:36.704 CXX test/cpp_headers/init.o 00:05:36.704 CXX test/cpp_headers/ioat.o 00:05:36.704 CXX test/cpp_headers/ioat_spec.o 00:05:36.704 CC test/bdev/bdevio/bdevio.o 00:05:36.704 CXX test/cpp_headers/iscsi_spec.o 00:05:36.704 CC app/fio/bdev/fio_plugin.o 00:05:36.704 LINK doorbell_aers 00:05:36.977 CXX test/cpp_headers/json.o 00:05:36.977 CXX test/cpp_headers/jsonrpc.o 00:05:36.977 CXX test/cpp_headers/keyring.o 00:05:36.977 CXX test/cpp_headers/keyring_module.o 00:05:36.977 LINK fdp 00:05:36.977 CXX test/cpp_headers/likely.o 00:05:36.977 CXX test/cpp_headers/log.o 00:05:36.977 CXX test/cpp_headers/lvol.o 00:05:36.977 CXX test/cpp_headers/md5.o 00:05:36.977 CXX test/cpp_headers/memory.o 00:05:36.977 CXX test/cpp_headers/mmio.o 00:05:36.977 CXX 
test/cpp_headers/nbd.o 00:05:36.977 LINK bdevio 00:05:36.977 CXX test/cpp_headers/net.o 00:05:37.236 CXX test/cpp_headers/notify.o 00:05:37.236 CXX test/cpp_headers/nvme.o 00:05:37.236 CXX test/cpp_headers/nvme_intel.o 00:05:37.236 CXX test/cpp_headers/nvme_ocssd.o 00:05:37.236 LINK spdk_bdev 00:05:37.236 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:37.236 CXX test/cpp_headers/nvme_spec.o 00:05:37.236 CXX test/cpp_headers/nvme_zns.o 00:05:37.494 CXX test/cpp_headers/nvmf_cmd.o 00:05:37.494 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:37.494 CXX test/cpp_headers/nvmf.o 00:05:37.494 CXX test/cpp_headers/nvmf_spec.o 00:05:37.494 CXX test/cpp_headers/nvmf_transport.o 00:05:37.494 CXX test/cpp_headers/opal.o 00:05:37.494 CXX test/cpp_headers/opal_spec.o 00:05:37.494 CXX test/cpp_headers/pci_ids.o 00:05:37.494 CXX test/cpp_headers/pipe.o 00:05:37.494 CXX test/cpp_headers/queue.o 00:05:37.495 CXX test/cpp_headers/reduce.o 00:05:37.495 CXX test/cpp_headers/rpc.o 00:05:37.753 CXX test/cpp_headers/scheduler.o 00:05:37.753 CXX test/cpp_headers/scsi.o 00:05:37.753 CXX test/cpp_headers/scsi_spec.o 00:05:37.753 CXX test/cpp_headers/sock.o 00:05:37.753 CXX test/cpp_headers/stdinc.o 00:05:37.753 CXX test/cpp_headers/string.o 00:05:37.753 CXX test/cpp_headers/thread.o 00:05:37.753 CXX test/cpp_headers/trace.o 00:05:37.753 CXX test/cpp_headers/trace_parser.o 00:05:37.753 CXX test/cpp_headers/tree.o 00:05:37.753 CXX test/cpp_headers/ublk.o 00:05:37.753 CXX test/cpp_headers/util.o 00:05:37.753 CXX test/cpp_headers/uuid.o 00:05:37.753 CXX test/cpp_headers/version.o 00:05:38.011 LINK cuse 00:05:38.011 CXX test/cpp_headers/vfio_user_pci.o 00:05:38.011 CXX test/cpp_headers/vfio_user_spec.o 00:05:38.012 CXX test/cpp_headers/vhost.o 00:05:38.012 CXX test/cpp_headers/vmd.o 00:05:38.012 CXX test/cpp_headers/xor.o 00:05:38.012 CXX test/cpp_headers/zipf.o 00:05:40.544 LINK esnap 00:05:40.544 00:05:40.544 real 1m27.215s 00:05:40.544 user 7m1.334s 00:05:40.544 sys 1m8.839s 00:05:40.544 16:11:05 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:40.544 ************************************ 00:05:40.544 END TEST make 00:05:40.544 ************************************ 00:05:40.544 16:11:05 make -- common/autotest_common.sh@10 -- $ set +x 00:05:40.544 16:11:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:40.544 16:11:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:40.544 16:11:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:40.544 16:11:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:40.544 16:11:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:40.544 16:11:05 -- pm/common@44 -- $ pid=6042 00:05:40.544 16:11:05 -- pm/common@50 -- $ kill -TERM 6042 00:05:40.544 16:11:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:40.544 16:11:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:40.544 16:11:05 -- pm/common@44 -- $ pid=6044 00:05:40.544 16:11:05 -- pm/common@50 -- $ kill -TERM 6044 00:05:40.544 16:11:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:40.544 16:11:05 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:40.544 16:11:06 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.544 16:11:06 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.544 16:11:06 -- 
common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.544 16:11:06 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.544 16:11:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.544 16:11:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.544 16:11:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.544 16:11:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.544 16:11:06 -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.544 16:11:06 -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.544 16:11:06 -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.544 16:11:06 -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.544 16:11:06 -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.544 16:11:06 -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.544 16:11:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.544 16:11:06 -- scripts/common.sh@344 -- # case "$op" in 00:05:40.544 16:11:06 -- scripts/common.sh@345 -- # : 1 00:05:40.544 16:11:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.544 16:11:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.544 16:11:06 -- scripts/common.sh@365 -- # decimal 1 00:05:40.544 16:11:06 -- scripts/common.sh@353 -- # local d=1 00:05:40.544 16:11:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.544 16:11:06 -- scripts/common.sh@355 -- # echo 1 00:05:40.544 16:11:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.544 16:11:06 -- scripts/common.sh@366 -- # decimal 2 00:05:40.544 16:11:06 -- scripts/common.sh@353 -- # local d=2 00:05:40.544 16:11:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.544 16:11:06 -- scripts/common.sh@355 -- # echo 2 00:05:40.544 16:11:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.544 16:11:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.544 16:11:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.545 16:11:06 -- scripts/common.sh@368 -- # return 0 00:05:40.545 16:11:06 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.545 16:11:06 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.545 --rc genhtml_branch_coverage=1 00:05:40.545 --rc genhtml_function_coverage=1 00:05:40.545 --rc genhtml_legend=1 00:05:40.545 --rc geninfo_all_blocks=1 00:05:40.545 --rc geninfo_unexecuted_blocks=1 00:05:40.545 00:05:40.545 ' 00:05:40.545 16:11:06 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.545 --rc genhtml_branch_coverage=1 00:05:40.545 --rc genhtml_function_coverage=1 00:05:40.545 --rc genhtml_legend=1 00:05:40.545 --rc geninfo_all_blocks=1 00:05:40.545 --rc geninfo_unexecuted_blocks=1 00:05:40.545 00:05:40.545 ' 00:05:40.545 16:11:06 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.545 --rc genhtml_branch_coverage=1 00:05:40.545 --rc genhtml_function_coverage=1 00:05:40.545 --rc genhtml_legend=1 00:05:40.545 --rc geninfo_all_blocks=1 00:05:40.545 --rc geninfo_unexecuted_blocks=1 00:05:40.545 00:05:40.545 ' 00:05:40.545 16:11:06 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.545 --rc genhtml_branch_coverage=1 00:05:40.545 --rc genhtml_function_coverage=1 00:05:40.545 --rc genhtml_legend=1 
00:05:40.545 --rc geninfo_all_blocks=1 00:05:40.545 --rc geninfo_unexecuted_blocks=1 00:05:40.545 00:05:40.545 ' 00:05:40.545 16:11:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:40.545 16:11:06 -- nvmf/common.sh@7 -- # uname -s 00:05:40.545 16:11:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.545 16:11:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.545 16:11:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.545 16:11:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.545 16:11:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.545 16:11:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.545 16:11:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.545 16:11:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.545 16:11:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.545 16:11:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.804 16:11:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:05:40.804 16:11:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:05:40.804 16:11:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.804 16:11:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.804 16:11:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:40.804 16:11:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.804 16:11:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.804 16:11:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.804 16:11:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.804 16:11:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.804 16:11:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.804 16:11:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.804 16:11:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.804 16:11:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.804 16:11:06 -- paths/export.sh@5 -- # export PATH 00:05:40.804 16:11:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.804 16:11:06 -- nvmf/common.sh@51 -- # : 0 00:05:40.804 16:11:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:40.804 16:11:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:40.804 16:11:06 -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:05:40.804 16:11:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.804 16:11:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.804 16:11:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:40.804 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:40.804 16:11:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:40.804 16:11:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:40.804 16:11:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:40.804 16:11:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:40.804 16:11:06 -- spdk/autotest.sh@32 -- # uname -s 00:05:40.804 16:11:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:40.804 16:11:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:40.804 16:11:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:40.804 16:11:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:40.804 16:11:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:40.804 16:11:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:40.804 16:11:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:40.804 16:11:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:40.804 16:11:06 -- spdk/autotest.sh@48 -- # udevadm_pid=66641 00:05:40.804 16:11:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:40.804 16:11:06 -- pm/common@17 -- # local monitor 00:05:40.804 16:11:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:40.804 16:11:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:40.804 16:11:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:40.804 16:11:06 -- pm/common@25 -- # sleep 1 00:05:40.804 16:11:06 -- pm/common@21 -- # date +%s 00:05:40.804 16:11:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732637466 00:05:40.805 16:11:06 -- pm/common@21 -- # date +%s 00:05:40.805 16:11:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732637466 00:05:40.805 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732637466_collect-vmstat.pm.log 00:05:40.805 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732637466_collect-cpu-load.pm.log 00:05:41.741 16:11:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:41.741 16:11:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:41.741 16:11:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.741 16:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:41.741 16:11:07 -- spdk/autotest.sh@59 -- # create_test_list 00:05:41.742 16:11:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:41.742 16:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:41.742 16:11:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:41.742 16:11:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:41.742 16:11:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:41.742 16:11:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:41.742 
16:11:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:41.742 16:11:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:41.742 16:11:07 -- common/autotest_common.sh@1457 -- # uname 00:05:41.742 16:11:07 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:41.742 16:11:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:41.742 16:11:07 -- common/autotest_common.sh@1477 -- # uname 00:05:41.742 16:11:07 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:41.742 16:11:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:41.742 16:11:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:42.001 lcov: LCOV version 1.15 00:05:42.001 16:11:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:56.890 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:56.890 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:11.772 16:11:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:11.772 16:11:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.772 16:11:36 -- common/autotest_common.sh@10 -- # set +x 00:06:11.772 16:11:36 -- spdk/autotest.sh@78 -- # rm -f 00:06:11.772 16:11:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:11.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:11.773 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:11.773 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:11.773 16:11:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:11.773 16:11:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:11.773 16:11:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:11.773 16:11:36 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:11.773 16:11:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:11.773 16:11:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:11.773 16:11:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:11.773 16:11:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:11.773 16:11:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.773 16:11:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:11.773 16:11:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:11.773 16:11:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:11.773 16:11:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:11.773 16:11:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.773 16:11:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:11.773 16:11:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:11.773 16:11:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:11.773 
16:11:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:11.773 16:11:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.773 16:11:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:11.773 16:11:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:11.773 16:11:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:11.773 16:11:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:11.773 16:11:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:11.773 16:11:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:11.773 16:11:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:11.773 16:11:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:11.773 16:11:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:11.773 16:11:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:11.773 16:11:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:11.773 No valid GPT data, bailing 00:06:11.773 16:11:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:11.773 16:11:36 -- scripts/common.sh@394 -- # pt= 00:06:11.773 16:11:36 -- scripts/common.sh@395 -- # return 1 00:06:11.773 16:11:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:11.773 1+0 records in 00:06:11.773 1+0 records out 00:06:11.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452637 s, 232 MB/s 00:06:11.773 16:11:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:11.773 16:11:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:11.773 16:11:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:11.773 16:11:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:11.773 16:11:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:11.773 No valid GPT data, bailing 00:06:11.773 16:11:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:11.773 16:11:36 -- scripts/common.sh@394 -- # pt= 00:06:11.773 16:11:36 -- scripts/common.sh@395 -- # return 1 00:06:11.773 16:11:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:11.773 1+0 records in 00:06:11.773 1+0 records out 00:06:11.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00306347 s, 342 MB/s 00:06:11.773 16:11:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:11.773 16:11:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:11.773 16:11:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:11.773 16:11:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:11.773 16:11:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:11.773 No valid GPT data, bailing 00:06:11.773 16:11:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:11.773 16:11:36 -- scripts/common.sh@394 -- # pt= 00:06:11.773 16:11:36 -- scripts/common.sh@395 -- # return 1 00:06:11.773 16:11:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:11.773 1+0 records in 00:06:11.773 1+0 records out 00:06:11.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00356632 s, 294 MB/s 00:06:11.773 16:11:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:11.773 16:11:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:11.773 16:11:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 
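The wipe loop being traced here is autotest's pre-test cleanup: it skips zoned namespaces (sysfs queue/zoned reports something other than "none") and zeroes the first MiB of every namespace that carries no partition-table signature. A simplified sketch of that logic, assuming the same sysfs layout (the real script additionally consults scripts/spdk-gpt.py before declaring a device unused):

  for dev in /sys/block/nvme*; do
      name=$(basename "$dev")
      # zoned namespaces are never wiped
      [[ -e $dev/queue/zoned && $(< "$dev/queue/zoned") != none ]] && continue
      # no partition-table signature -> treat the namespace as unused and zero its first MiB
      if [[ -z $(blkid -s PTTYPE -o value "/dev/$name") ]]; then
          dd if=/dev/zero of="/dev/$name" bs=1M count=1
      fi
  done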
00:06:11.773 16:11:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:11.773 16:11:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:11.773 No valid GPT data, bailing 00:06:11.773 16:11:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:11.773 16:11:37 -- scripts/common.sh@394 -- # pt= 00:06:11.773 16:11:37 -- scripts/common.sh@395 -- # return 1 00:06:11.773 16:11:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:11.773 1+0 records in 00:06:11.773 1+0 records out 00:06:11.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040593 s, 258 MB/s 00:06:11.773 16:11:37 -- spdk/autotest.sh@105 -- # sync 00:06:11.773 16:11:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:11.773 16:11:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:11.773 16:11:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:13.696 16:11:39 -- spdk/autotest.sh@111 -- # uname -s 00:06:13.955 16:11:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:13.955 16:11:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:13.955 16:11:39 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:14.523 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.523 Hugepages 00:06:14.523 node hugesize free / total 00:06:14.523 node0 1048576kB 0 / 0 00:06:14.523 node0 2048kB 0 / 0 00:06:14.523 00:06:14.523 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:14.523 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:14.523 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:14.783 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:14.783 16:11:40 -- spdk/autotest.sh@117 -- # uname -s 00:06:14.783 16:11:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:14.783 16:11:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:14.783 16:11:40 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:15.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:15.352 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.611 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:15.611 16:11:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:16.548 16:11:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:16.548 16:11:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:16.548 16:11:42 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:16.548 16:11:42 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:16.548 16:11:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:16.548 16:11:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:16.548 16:11:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:16.548 16:11:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:16.548 16:11:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:16.548 16:11:42 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:16.548 16:11:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:16.548 16:11:42 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 
reset 00:06:17.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.116 Waiting for block devices as requested 00:06:17.116 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.116 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.116 16:11:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:17.116 16:11:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:17.116 16:11:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:17.116 16:11:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:17.116 16:11:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:17.116 16:11:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:17.116 16:11:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:17.116 16:11:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:17.116 16:11:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:17.116 16:11:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:17.116 16:11:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:17.116 16:11:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:17.116 16:11:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:17.116 16:11:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:17.116 16:11:42 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:17.116 16:11:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:17.116 16:11:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:17.116 16:11:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:17.116 16:11:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:17.116 16:11:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:17.116 16:11:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:17.116 16:11:42 -- common/autotest_common.sh@1543 -- # continue 00:06:17.116 16:11:42 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:17.116 16:11:42 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:17.116 16:11:42 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:17.116 16:11:42 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:17.116 16:11:42 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:17.116 16:11:42 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:17.116 16:11:42 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:17.384 16:11:42 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:17.384 16:11:42 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:17.384 16:11:42 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:17.384 16:11:42 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:17.384 16:11:42 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:17.384 16:11:42 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:17.384 16:11:42 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:17.384 16:11:42 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:17.384 16:11:42 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:17.384 16:11:42 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:17.384 16:11:42 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:17.384 16:11:42 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:17.384 16:11:42 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:17.384 16:11:42 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:17.384 16:11:42 -- common/autotest_common.sh@1543 -- # continue 00:06:17.384 16:11:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:17.384 16:11:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.384 16:11:42 -- common/autotest_common.sh@10 -- # set +x 00:06:17.384 16:11:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:17.384 16:11:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.384 16:11:42 -- common/autotest_common.sh@10 -- # set +x 00:06:17.384 16:11:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:17.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.979 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.238 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.238 16:11:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:18.238 16:11:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:18.238 16:11:43 -- common/autotest_common.sh@10 -- # set +x 00:06:18.238 16:11:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:18.238 16:11:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:18.238 16:11:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:18.238 16:11:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:18.238 16:11:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:18.238 16:11:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:18.238 16:11:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:18.238 16:11:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:18.238 16:11:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:18.238 16:11:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:18.238 16:11:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:18.238 16:11:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:18.238 16:11:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:18.238 16:11:43 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:18.238 16:11:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:18.238 16:11:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:18.238 16:11:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:18.238 16:11:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:18.238 16:11:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:18.238 16:11:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:18.238 16:11:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:18.238 16:11:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:18.238 16:11:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
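The id-ctrl and PCI-ID probing above gates two cleanup paths. nvme_namespace_revert reads OACS (0x12a, bit 3 set, so Namespace Management is supported) and then unvmcap; since unallocated capacity is 0 on both controllers there is nothing to reclaim. opal_revert_cleanup additionally requires PCI device ID 0x0a54, which these QEMU controllers (0x0010) do not match, so it is skipped as well. The OACS/unvmcap check amounts to the following (controller path illustrative):

  oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)          # " 0x12a" here
  if (( (oacs & 0x8) != 0 )); then                                   # bit 3: Namespace Management
      unvmcap=$(nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && echo "no unallocated capacity, nothing to revert"
  fi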
00:06:18.238 16:11:43 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:18.238 16:11:43 -- common/autotest_common.sh@1572 -- # return 0 00:06:18.238 16:11:43 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:18.238 16:11:43 -- common/autotest_common.sh@1580 -- # return 0 00:06:18.238 16:11:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:18.238 16:11:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:18.238 16:11:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:18.238 16:11:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:18.238 16:11:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:18.238 16:11:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.238 16:11:43 -- common/autotest_common.sh@10 -- # set +x 00:06:18.238 16:11:43 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:18.238 16:11:43 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:18.238 16:11:43 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:18.238 16:11:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:18.238 16:11:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.238 16:11:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.238 16:11:43 -- common/autotest_common.sh@10 -- # set +x 00:06:18.238 ************************************ 00:06:18.238 START TEST env 00:06:18.238 ************************************ 00:06:18.238 16:11:43 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:18.497 * Looking for test storage... 00:06:18.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:18.497 16:11:43 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.497 16:11:43 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.497 16:11:43 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.497 16:11:43 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.497 16:11:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.497 16:11:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.497 16:11:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.497 16:11:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.497 16:11:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.497 16:11:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.497 16:11:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.497 16:11:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.497 16:11:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.497 16:11:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.497 16:11:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.497 16:11:43 env -- scripts/common.sh@344 -- # case "$op" in 00:06:18.497 16:11:43 env -- scripts/common.sh@345 -- # : 1 00:06:18.497 16:11:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.497 16:11:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.497 16:11:43 env -- scripts/common.sh@365 -- # decimal 1 00:06:18.497 16:11:43 env -- scripts/common.sh@353 -- # local d=1 00:06:18.497 16:11:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.497 16:11:43 env -- scripts/common.sh@355 -- # echo 1 00:06:18.497 16:11:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.497 16:11:44 env -- scripts/common.sh@366 -- # decimal 2 00:06:18.497 16:11:44 env -- scripts/common.sh@353 -- # local d=2 00:06:18.497 16:11:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.497 16:11:44 env -- scripts/common.sh@355 -- # echo 2 00:06:18.497 16:11:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.497 16:11:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.497 16:11:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.497 16:11:44 env -- scripts/common.sh@368 -- # return 0 00:06:18.497 16:11:44 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.497 16:11:44 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.497 --rc genhtml_branch_coverage=1 00:06:18.497 --rc genhtml_function_coverage=1 00:06:18.497 --rc genhtml_legend=1 00:06:18.497 --rc geninfo_all_blocks=1 00:06:18.497 --rc geninfo_unexecuted_blocks=1 00:06:18.497 00:06:18.497 ' 00:06:18.497 16:11:44 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.497 --rc genhtml_branch_coverage=1 00:06:18.497 --rc genhtml_function_coverage=1 00:06:18.497 --rc genhtml_legend=1 00:06:18.497 --rc geninfo_all_blocks=1 00:06:18.497 --rc geninfo_unexecuted_blocks=1 00:06:18.497 00:06:18.497 ' 00:06:18.497 16:11:44 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.497 --rc genhtml_branch_coverage=1 00:06:18.497 --rc genhtml_function_coverage=1 00:06:18.497 --rc genhtml_legend=1 00:06:18.497 --rc geninfo_all_blocks=1 00:06:18.497 --rc geninfo_unexecuted_blocks=1 00:06:18.497 00:06:18.497 ' 00:06:18.497 16:11:44 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.497 --rc genhtml_branch_coverage=1 00:06:18.497 --rc genhtml_function_coverage=1 00:06:18.497 --rc genhtml_legend=1 00:06:18.497 --rc geninfo_all_blocks=1 00:06:18.497 --rc geninfo_unexecuted_blocks=1 00:06:18.497 00:06:18.497 ' 00:06:18.497 16:11:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:18.497 16:11:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.497 16:11:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.497 16:11:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.497 ************************************ 00:06:18.497 START TEST env_memory 00:06:18.497 ************************************ 00:06:18.497 16:11:44 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:18.497 00:06:18.497 00:06:18.497 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.497 http://cunit.sourceforge.net/ 00:06:18.497 00:06:18.497 00:06:18.497 Suite: memory 00:06:18.497 Test: alloc and free memory map ...[2024-11-26 16:11:44.064099] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:18.497 passed 00:06:18.497 Test: mem map translation ...[2024-11-26 16:11:44.094829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:18.497 [2024-11-26 16:11:44.094866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:18.497 [2024-11-26 16:11:44.094921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:18.497 [2024-11-26 16:11:44.094932] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:18.756 passed 00:06:18.756 Test: mem map registration ...[2024-11-26 16:11:44.158771] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:18.756 [2024-11-26 16:11:44.158814] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:18.756 passed 00:06:18.756 Test: mem map adjacent registrations ...passed 00:06:18.756 00:06:18.756 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.756 suites 1 1 n/a 0 0 00:06:18.756 tests 4 4 4 0 0 00:06:18.756 asserts 152 152 152 0 n/a 00:06:18.756 00:06:18.756 Elapsed time = 0.212 seconds 00:06:18.756 00:06:18.756 real 0m0.228s 00:06:18.756 user 0m0.213s 00:06:18.756 sys 0m0.012s 00:06:18.756 16:11:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.756 ************************************ 00:06:18.756 END TEST env_memory 00:06:18.756 16:11:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:18.756 ************************************ 00:06:18.756 16:11:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:18.756 16:11:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.756 16:11:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.756 16:11:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.756 ************************************ 00:06:18.756 START TEST env_vtophys 00:06:18.756 ************************************ 00:06:18.756 16:11:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:18.756 EAL: lib.eal log level changed from notice to debug 00:06:18.756 EAL: Detected lcore 0 as core 0 on socket 0 00:06:18.756 EAL: Detected lcore 1 as core 0 on socket 0 00:06:18.756 EAL: Detected lcore 2 as core 0 on socket 0 00:06:18.756 EAL: Detected lcore 3 as core 0 on socket 0 00:06:18.756 EAL: Detected lcore 4 as core 0 on socket 0 00:06:18.756 EAL: Detected lcore 5 as core 0 on socket 0 00:06:18.756 EAL: Detected lcore 6 as core 0 on socket 0 00:06:18.756 EAL: Detected lcore 7 as core 0 on socket 0 00:06:18.756 EAL: Detected lcore 8 as core 0 on socket 0 00:06:18.756 EAL: Detected lcore 9 as core 0 on socket 0 00:06:18.756 EAL: Maximum logical cores by configuration: 128 00:06:18.756 EAL: Detected CPU lcores: 10 00:06:18.756 EAL: Detected NUMA nodes: 1 00:06:18.756 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:18.756 EAL: Detected shared linkage of DPDK 00:06:18.756 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:18.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:18.756 EAL: Registered [vdev] bus. 00:06:18.756 EAL: bus.vdev log level changed from disabled to notice 00:06:18.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:18.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:18.756 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:18.756 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:18.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:18.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:18.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:18.756 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:18.756 EAL: No shared files mode enabled, IPC will be disabled 00:06:18.756 EAL: No shared files mode enabled, IPC is disabled 00:06:18.756 EAL: Selected IOVA mode 'PA' 00:06:18.756 EAL: Probing VFIO support... 00:06:18.756 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:18.756 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:18.756 EAL: Ask a virtual area of 0x2e000 bytes 00:06:18.756 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:18.756 EAL: Setting up physically contiguous memory... 00:06:18.756 EAL: Setting maximum number of open files to 524288 00:06:18.756 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:18.756 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:18.756 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.756 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:18.756 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.756 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.756 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:18.756 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:18.756 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.756 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:18.756 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.756 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.756 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:18.756 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:18.756 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.756 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:18.756 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.756 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.756 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:18.756 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:18.756 EAL: Ask a virtual area of 0x61000 bytes 00:06:18.756 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:18.756 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:18.756 EAL: Ask a virtual area of 0x400000000 bytes 00:06:18.756 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:18.756 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:18.756 EAL: Hugepages will be freed exactly as allocated. 00:06:18.756 EAL: No shared files mode enabled, IPC is disabled 00:06:18.756 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: TSC frequency is ~2200000 KHz 00:06:19.016 EAL: Main lcore 0 is ready (tid=7f8cb0b1aa00;cpuset=[0]) 00:06:19.016 EAL: Trying to obtain current memory policy. 00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.016 EAL: Restoring previous memory policy: 0 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was expanded by 2MB 00:06:19.016 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:19.016 EAL: Mem event callback 'spdk:(nil)' registered 00:06:19.016 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:19.016 00:06:19.016 00:06:19.016 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.016 http://cunit.sourceforge.net/ 00:06:19.016 00:06:19.016 00:06:19.016 Suite: components_suite 00:06:19.016 Test: vtophys_malloc_test ...passed 00:06:19.016 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.016 EAL: Restoring previous memory policy: 4 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was expanded by 4MB 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was shrunk by 4MB 00:06:19.016 EAL: Trying to obtain current memory policy. 00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.016 EAL: Restoring previous memory policy: 4 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was expanded by 6MB 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was shrunk by 6MB 00:06:19.016 EAL: Trying to obtain current memory policy. 00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.016 EAL: Restoring previous memory policy: 4 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was expanded by 10MB 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was shrunk by 10MB 00:06:19.016 EAL: Trying to obtain current memory policy. 
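The EAL reservations above are internally consistent: each of the four memseg lists holds 8192 segments of 2 MiB hugepages, so every list needs a 16 GiB virtual window (0x400000000 bytes) next to a small 0x61000-byte metadata area. A quick check of the arithmetic:

  printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))         # 0x400000000 -> 16 GiB per memseg list
  printf '0x%x\n' $(( 4 * 8192 * 2 * 1024 * 1024 ))     # 0x1000000000 -> 64 GiB of VA reserved in total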
00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.016 EAL: Restoring previous memory policy: 4 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was expanded by 18MB 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was shrunk by 18MB 00:06:19.016 EAL: Trying to obtain current memory policy. 00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.016 EAL: Restoring previous memory policy: 4 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was expanded by 34MB 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was shrunk by 34MB 00:06:19.016 EAL: Trying to obtain current memory policy. 00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.016 EAL: Restoring previous memory policy: 4 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was expanded by 66MB 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was shrunk by 66MB 00:06:19.016 EAL: Trying to obtain current memory policy. 00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.016 EAL: Restoring previous memory policy: 4 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was expanded by 130MB 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was shrunk by 130MB 00:06:19.016 EAL: Trying to obtain current memory policy. 00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.016 EAL: Restoring previous memory policy: 4 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was expanded by 258MB 00:06:19.016 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.016 EAL: request: mp_malloc_sync 00:06:19.016 EAL: No shared files mode enabled, IPC is disabled 00:06:19.016 EAL: Heap on socket 0 was shrunk by 258MB 00:06:19.016 EAL: Trying to obtain current memory policy. 
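The expand/shrink steps in vtophys_spdk_malloc_test look irregular (4, 6, 10, 18, 34, 66, 130, 258 MB so far) but appear to track power-of-two allocations of 2 MB through 1 GB plus roughly 2 MB of allocator overhead each; that reading is inferred from the numbers, not from the test source. The sequence can be reproduced with:

  for k in $(seq 1 10); do
      echo "$(( (1 << k) + 2 ))MB"       # 4 6 10 18 34 66 130 258 514 1026
  done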
00:06:19.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.275 EAL: Restoring previous memory policy: 4 00:06:19.275 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.275 EAL: request: mp_malloc_sync 00:06:19.275 EAL: No shared files mode enabled, IPC is disabled 00:06:19.275 EAL: Heap on socket 0 was expanded by 514MB 00:06:19.275 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.275 EAL: request: mp_malloc_sync 00:06:19.275 EAL: No shared files mode enabled, IPC is disabled 00:06:19.275 EAL: Heap on socket 0 was shrunk by 514MB 00:06:19.275 EAL: Trying to obtain current memory policy. 00:06:19.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:19.533 EAL: Restoring previous memory policy: 4 00:06:19.533 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.533 EAL: request: mp_malloc_sync 00:06:19.533 EAL: No shared files mode enabled, IPC is disabled 00:06:19.533 EAL: Heap on socket 0 was expanded by 1026MB 00:06:19.533 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.533 passed 00:06:19.533 00:06:19.533 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.533 suites 1 1 n/a 0 0 00:06:19.533 tests 2 2 2 0 0 00:06:19.533 asserts 5918 5918 5918 0 n/a 00:06:19.533 00:06:19.533 Elapsed time = 0.631 seconds 00:06:19.533 EAL: request: mp_malloc_sync 00:06:19.533 EAL: No shared files mode enabled, IPC is disabled 00:06:19.533 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:19.533 EAL: Calling mem event callback 'spdk:(nil)' 00:06:19.533 EAL: request: mp_malloc_sync 00:06:19.533 EAL: No shared files mode enabled, IPC is disabled 00:06:19.533 EAL: Heap on socket 0 was shrunk by 2MB 00:06:19.533 EAL: No shared files mode enabled, IPC is disabled 00:06:19.533 EAL: No shared files mode enabled, IPC is disabled 00:06:19.533 EAL: No shared files mode enabled, IPC is disabled 00:06:19.533 00:06:19.533 real 0m0.843s 00:06:19.533 user 0m0.415s 00:06:19.533 sys 0m0.288s 00:06:19.533 16:11:45 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.533 16:11:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:19.533 ************************************ 00:06:19.533 END TEST env_vtophys 00:06:19.533 ************************************ 00:06:19.791 16:11:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:19.791 16:11:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.791 16:11:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.791 16:11:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 ************************************ 00:06:19.791 START TEST env_pci 00:06:19.791 ************************************ 00:06:19.791 16:11:45 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:19.791 00:06:19.791 00:06:19.791 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.791 http://cunit.sourceforge.net/ 00:06:19.791 00:06:19.791 00:06:19.791 Suite: pci 00:06:19.791 Test: pci_hook ...[2024-11-26 16:11:45.199621] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68852 has claimed it 00:06:19.791 passed 00:06:19.791 00:06:19.791 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.791 suites 1 1 n/a 0 0 00:06:19.791 tests 1 1 1 0 0 00:06:19.791 asserts 25 25 25 0 n/a 00:06:19.791 00:06:19.791 Elapsed time = 0.002 secondsEAL: Cannot find device 
(10000:00:01.0) 00:06:19.791 EAL: Failed to attach device on primary process 00:06:19.791 00:06:19.791 00:06:19.791 real 0m0.016s 00:06:19.791 user 0m0.004s 00:06:19.791 sys 0m0.011s 00:06:19.791 16:11:45 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.791 16:11:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 ************************************ 00:06:19.791 END TEST env_pci 00:06:19.791 ************************************ 00:06:19.791 16:11:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:19.791 16:11:45 env -- env/env.sh@15 -- # uname 00:06:19.791 16:11:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:19.791 16:11:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:19.791 16:11:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:19.791 16:11:45 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:19.791 16:11:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.791 16:11:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.791 ************************************ 00:06:19.791 START TEST env_dpdk_post_init 00:06:19.791 ************************************ 00:06:19.791 16:11:45 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:19.791 EAL: Detected CPU lcores: 10 00:06:19.791 EAL: Detected NUMA nodes: 1 00:06:19.791 EAL: Detected shared linkage of DPDK 00:06:19.791 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:19.791 EAL: Selected IOVA mode 'PA' 00:06:19.791 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:19.791 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:20.051 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:20.051 Starting DPDK initialization... 00:06:20.051 Starting SPDK post initialization... 00:06:20.051 SPDK NVMe probe 00:06:20.051 Attaching to 0000:00:10.0 00:06:20.051 Attaching to 0000:00:11.0 00:06:20.051 Attached to 0000:00:10.0 00:06:20.051 Attached to 0000:00:11.0 00:06:20.051 Cleaning up... 
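env.sh assembled the options used for env_dpdk_post_init above: -c 0x1 confines the helper to core 0, and on Linux it appends --base-virtaddr=0x200000000000 so EAL reserves its memory at the same fixed base seen in the earlier vtophys run (keeping mappings predictable regardless of ASLR; that rationale is the usual one for the flag, not something stated in this log). The equivalent manual invocation, exactly as run here:

  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000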
00:06:20.051 00:06:20.051 real 0m0.193s 00:06:20.051 user 0m0.064s 00:06:20.051 sys 0m0.028s 00:06:20.051 16:11:45 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.051 16:11:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.051 ************************************ 00:06:20.051 END TEST env_dpdk_post_init 00:06:20.051 ************************************ 00:06:20.051 16:11:45 env -- env/env.sh@26 -- # uname 00:06:20.051 16:11:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:20.051 16:11:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:20.051 16:11:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.051 16:11:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.051 16:11:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.051 ************************************ 00:06:20.051 START TEST env_mem_callbacks 00:06:20.051 ************************************ 00:06:20.051 16:11:45 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:20.051 EAL: Detected CPU lcores: 10 00:06:20.051 EAL: Detected NUMA nodes: 1 00:06:20.051 EAL: Detected shared linkage of DPDK 00:06:20.051 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:20.051 EAL: Selected IOVA mode 'PA' 00:06:20.051 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:20.051 00:06:20.051 00:06:20.051 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.051 http://cunit.sourceforge.net/ 00:06:20.051 00:06:20.051 00:06:20.051 Suite: memory 00:06:20.051 Test: test ... 00:06:20.051 register 0x200000200000 2097152 00:06:20.051 malloc 3145728 00:06:20.051 register 0x200000400000 4194304 00:06:20.051 buf 0x200000500000 len 3145728 PASSED 00:06:20.051 malloc 64 00:06:20.051 buf 0x2000004fff40 len 64 PASSED 00:06:20.051 malloc 4194304 00:06:20.051 register 0x200000800000 6291456 00:06:20.051 buf 0x200000a00000 len 4194304 PASSED 00:06:20.051 free 0x200000500000 3145728 00:06:20.051 free 0x2000004fff40 64 00:06:20.051 unregister 0x200000400000 4194304 PASSED 00:06:20.051 free 0x200000a00000 4194304 00:06:20.051 unregister 0x200000800000 6291456 PASSED 00:06:20.051 malloc 8388608 00:06:20.051 register 0x200000400000 10485760 00:06:20.051 buf 0x200000600000 len 8388608 PASSED 00:06:20.051 free 0x200000600000 8388608 00:06:20.051 unregister 0x200000400000 10485760 PASSED 00:06:20.051 passed 00:06:20.051 00:06:20.051 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.051 suites 1 1 n/a 0 0 00:06:20.051 tests 1 1 1 0 0 00:06:20.051 asserts 15 15 15 0 n/a 00:06:20.051 00:06:20.051 Elapsed time = 0.008 seconds 00:06:20.051 00:06:20.051 real 0m0.140s 00:06:20.051 user 0m0.017s 00:06:20.051 sys 0m0.022s 00:06:20.051 16:11:45 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.051 ************************************ 00:06:20.051 16:11:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:20.051 END TEST env_mem_callbacks 00:06:20.051 ************************************ 00:06:20.051 00:06:20.051 real 0m1.874s 00:06:20.051 user 0m0.919s 00:06:20.051 sys 0m0.591s 00:06:20.051 16:11:45 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.051 16:11:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.051 ************************************ 00:06:20.051 END TEST env 00:06:20.051 
************************************ 00:06:20.311 16:11:45 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:20.311 16:11:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.311 16:11:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.311 16:11:45 -- common/autotest_common.sh@10 -- # set +x 00:06:20.311 ************************************ 00:06:20.311 START TEST rpc 00:06:20.311 ************************************ 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:20.311 * Looking for test storage... 00:06:20.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.311 16:11:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.311 16:11:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.311 16:11:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.311 16:11:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.311 16:11:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.311 16:11:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.311 16:11:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.311 16:11:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.311 16:11:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.311 16:11:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.311 16:11:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.311 16:11:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:20.311 16:11:45 rpc -- scripts/common.sh@345 -- # : 1 00:06:20.311 16:11:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.311 16:11:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.311 16:11:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:20.311 16:11:45 rpc -- scripts/common.sh@353 -- # local d=1 00:06:20.311 16:11:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.311 16:11:45 rpc -- scripts/common.sh@355 -- # echo 1 00:06:20.311 16:11:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.311 16:11:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:20.311 16:11:45 rpc -- scripts/common.sh@353 -- # local d=2 00:06:20.311 16:11:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.311 16:11:45 rpc -- scripts/common.sh@355 -- # echo 2 00:06:20.311 16:11:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.311 16:11:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.311 16:11:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.311 16:11:45 rpc -- scripts/common.sh@368 -- # return 0 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.311 --rc genhtml_branch_coverage=1 00:06:20.311 --rc genhtml_function_coverage=1 00:06:20.311 --rc genhtml_legend=1 00:06:20.311 --rc geninfo_all_blocks=1 00:06:20.311 --rc geninfo_unexecuted_blocks=1 00:06:20.311 00:06:20.311 ' 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.311 --rc genhtml_branch_coverage=1 00:06:20.311 --rc genhtml_function_coverage=1 00:06:20.311 --rc genhtml_legend=1 00:06:20.311 --rc geninfo_all_blocks=1 00:06:20.311 --rc geninfo_unexecuted_blocks=1 00:06:20.311 00:06:20.311 ' 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.311 --rc genhtml_branch_coverage=1 00:06:20.311 --rc genhtml_function_coverage=1 00:06:20.311 --rc genhtml_legend=1 00:06:20.311 --rc geninfo_all_blocks=1 00:06:20.311 --rc geninfo_unexecuted_blocks=1 00:06:20.311 00:06:20.311 ' 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.311 --rc genhtml_branch_coverage=1 00:06:20.311 --rc genhtml_function_coverage=1 00:06:20.311 --rc genhtml_legend=1 00:06:20.311 --rc geninfo_all_blocks=1 00:06:20.311 --rc geninfo_unexecuted_blocks=1 00:06:20.311 00:06:20.311 ' 00:06:20.311 16:11:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68969 00:06:20.311 16:11:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.311 16:11:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:20.311 16:11:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68969 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 68969 ']' 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
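The harness above starts the target with the bdev tracepoint group enabled (-e bdev) and then blocks until the JSON-RPC listener at /var/tmp/spdk.sock answers. A rough manual equivalent, assuming the repository path shown in the log (adjust $SPDK for another checkout):

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt -e bdev &
  # Poll the default UNIX-domain RPC socket until the target responds to a trivial call.
  until $SPDK/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
      sleep 0.2
  done

The waitforlisten helper used in the trace does essentially this polling, with retry limits and xtrace bookkeeping around it.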
00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.311 16:11:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.571 [2024-11-26 16:11:46.003623] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:06:20.571 [2024-11-26 16:11:46.003736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68969 ] 00:06:20.571 [2024-11-26 16:11:46.155942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.571 [2024-11-26 16:11:46.179985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:20.571 [2024-11-26 16:11:46.180047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68969' to capture a snapshot of events at runtime. 00:06:20.571 [2024-11-26 16:11:46.180061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.571 [2024-11-26 16:11:46.180071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.571 [2024-11-26 16:11:46.180079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68969 for offline analysis/debug. 00:06:20.571 [2024-11-26 16:11:46.180489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.831 [2024-11-26 16:11:46.224357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.831 16:11:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.831 16:11:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.831 16:11:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.831 16:11:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.831 16:11:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:20.831 16:11:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:20.831 16:11:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.831 16:11:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.831 16:11:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.831 ************************************ 00:06:20.831 START TEST rpc_integrity 00:06:20.831 ************************************ 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:20.831 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.831 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:20.831 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:20.831 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:20.831 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.831 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:20.831 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.831 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.831 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:20.831 { 00:06:20.831 "name": "Malloc0", 00:06:20.831 "aliases": [ 00:06:20.831 "6514398b-3d15-4bac-b9e5-37b5875f37aa" 00:06:20.831 ], 00:06:20.831 "product_name": "Malloc disk", 00:06:20.831 "block_size": 512, 00:06:20.831 "num_blocks": 16384, 00:06:20.831 "uuid": "6514398b-3d15-4bac-b9e5-37b5875f37aa", 00:06:20.831 "assigned_rate_limits": { 00:06:20.831 "rw_ios_per_sec": 0, 00:06:20.831 "rw_mbytes_per_sec": 0, 00:06:20.831 "r_mbytes_per_sec": 0, 00:06:20.831 "w_mbytes_per_sec": 0 00:06:20.831 }, 00:06:20.831 "claimed": false, 00:06:20.831 "zoned": false, 00:06:20.831 "supported_io_types": { 00:06:20.831 "read": true, 00:06:20.831 "write": true, 00:06:20.831 "unmap": true, 00:06:20.831 "flush": true, 00:06:20.831 "reset": true, 00:06:20.831 "nvme_admin": false, 00:06:20.831 "nvme_io": false, 00:06:20.831 "nvme_io_md": false, 00:06:20.831 "write_zeroes": true, 00:06:20.831 "zcopy": true, 00:06:20.831 "get_zone_info": false, 00:06:20.831 "zone_management": false, 00:06:20.831 "zone_append": false, 00:06:20.831 "compare": false, 00:06:20.831 "compare_and_write": false, 00:06:20.831 "abort": true, 00:06:20.831 "seek_hole": false, 00:06:20.831 "seek_data": false, 00:06:20.831 "copy": true, 00:06:20.831 "nvme_iov_md": false 00:06:20.831 }, 00:06:20.831 "memory_domains": [ 00:06:20.831 { 00:06:20.831 "dma_device_id": "system", 00:06:20.831 "dma_device_type": 1 00:06:20.831 }, 00:06:20.831 { 00:06:20.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.831 "dma_device_type": 2 00:06:20.831 } 00:06:20.831 ], 00:06:20.831 "driver_specific": {} 00:06:20.831 } 00:06:20.831 ]' 00:06:20.831 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 [2024-11-26 16:11:46.526214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:21.091 [2024-11-26 16:11:46.526292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.091 [2024-11-26 16:11:46.526309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d91200 00:06:21.091 [2024-11-26 16:11:46.526318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.091 [2024-11-26 16:11:46.527947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.091 [2024-11-26 16:11:46.528010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:06:21.091 Passthru0 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:21.091 { 00:06:21.091 "name": "Malloc0", 00:06:21.091 "aliases": [ 00:06:21.091 "6514398b-3d15-4bac-b9e5-37b5875f37aa" 00:06:21.091 ], 00:06:21.091 "product_name": "Malloc disk", 00:06:21.091 "block_size": 512, 00:06:21.091 "num_blocks": 16384, 00:06:21.091 "uuid": "6514398b-3d15-4bac-b9e5-37b5875f37aa", 00:06:21.091 "assigned_rate_limits": { 00:06:21.091 "rw_ios_per_sec": 0, 00:06:21.091 "rw_mbytes_per_sec": 0, 00:06:21.091 "r_mbytes_per_sec": 0, 00:06:21.091 "w_mbytes_per_sec": 0 00:06:21.091 }, 00:06:21.091 "claimed": true, 00:06:21.091 "claim_type": "exclusive_write", 00:06:21.091 "zoned": false, 00:06:21.091 "supported_io_types": { 00:06:21.091 "read": true, 00:06:21.091 "write": true, 00:06:21.091 "unmap": true, 00:06:21.091 "flush": true, 00:06:21.091 "reset": true, 00:06:21.091 "nvme_admin": false, 00:06:21.091 "nvme_io": false, 00:06:21.091 "nvme_io_md": false, 00:06:21.091 "write_zeroes": true, 00:06:21.091 "zcopy": true, 00:06:21.091 "get_zone_info": false, 00:06:21.091 "zone_management": false, 00:06:21.091 "zone_append": false, 00:06:21.091 "compare": false, 00:06:21.091 "compare_and_write": false, 00:06:21.091 "abort": true, 00:06:21.091 "seek_hole": false, 00:06:21.091 "seek_data": false, 00:06:21.091 "copy": true, 00:06:21.091 "nvme_iov_md": false 00:06:21.091 }, 00:06:21.091 "memory_domains": [ 00:06:21.091 { 00:06:21.091 "dma_device_id": "system", 00:06:21.091 "dma_device_type": 1 00:06:21.091 }, 00:06:21.091 { 00:06:21.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.091 "dma_device_type": 2 00:06:21.091 } 00:06:21.091 ], 00:06:21.091 "driver_specific": {} 00:06:21.091 }, 00:06:21.091 { 00:06:21.091 "name": "Passthru0", 00:06:21.091 "aliases": [ 00:06:21.091 "5e2f2d6c-77e7-5a4f-af7e-6f9c3f69443b" 00:06:21.091 ], 00:06:21.091 "product_name": "passthru", 00:06:21.091 "block_size": 512, 00:06:21.091 "num_blocks": 16384, 00:06:21.091 "uuid": "5e2f2d6c-77e7-5a4f-af7e-6f9c3f69443b", 00:06:21.091 "assigned_rate_limits": { 00:06:21.091 "rw_ios_per_sec": 0, 00:06:21.091 "rw_mbytes_per_sec": 0, 00:06:21.091 "r_mbytes_per_sec": 0, 00:06:21.091 "w_mbytes_per_sec": 0 00:06:21.091 }, 00:06:21.091 "claimed": false, 00:06:21.091 "zoned": false, 00:06:21.091 "supported_io_types": { 00:06:21.091 "read": true, 00:06:21.091 "write": true, 00:06:21.091 "unmap": true, 00:06:21.091 "flush": true, 00:06:21.091 "reset": true, 00:06:21.091 "nvme_admin": false, 00:06:21.091 "nvme_io": false, 00:06:21.091 "nvme_io_md": false, 00:06:21.091 "write_zeroes": true, 00:06:21.091 "zcopy": true, 00:06:21.091 "get_zone_info": false, 00:06:21.091 "zone_management": false, 00:06:21.091 "zone_append": false, 00:06:21.091 "compare": false, 00:06:21.091 "compare_and_write": false, 00:06:21.091 "abort": true, 00:06:21.091 "seek_hole": false, 00:06:21.091 "seek_data": false, 00:06:21.091 "copy": true, 00:06:21.091 "nvme_iov_md": false 00:06:21.091 }, 00:06:21.091 "memory_domains": [ 00:06:21.091 { 00:06:21.091 "dma_device_id": "system", 00:06:21.091 
"dma_device_type": 1 00:06:21.091 }, 00:06:21.091 { 00:06:21.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.091 "dma_device_type": 2 00:06:21.091 } 00:06:21.091 ], 00:06:21.091 "driver_specific": { 00:06:21.091 "passthru": { 00:06:21.091 "name": "Passthru0", 00:06:21.091 "base_bdev_name": "Malloc0" 00:06:21.091 } 00:06:21.091 } 00:06:21.091 } 00:06:21.091 ]' 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:21.091 16:11:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:21.091 00:06:21.091 real 0m0.326s 00:06:21.091 user 0m0.211s 00:06:21.091 sys 0m0.045s 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.091 ************************************ 00:06:21.091 16:11:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.091 END TEST rpc_integrity 00:06:21.091 ************************************ 00:06:21.091 16:11:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:21.091 16:11:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.091 16:11:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.091 16:11:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.351 ************************************ 00:06:21.351 START TEST rpc_plugins 00:06:21.351 ************************************ 00:06:21.351 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:21.351 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:21.351 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.351 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.351 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.351 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:21.351 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:21.351 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.351 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.351 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:21.351 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:21.351 { 00:06:21.351 "name": "Malloc1", 00:06:21.351 "aliases": [ 00:06:21.351 "14a59e34-5fe8-40ec-a5f6-332f9386a8b7" 00:06:21.351 ], 00:06:21.351 "product_name": "Malloc disk", 00:06:21.351 "block_size": 4096, 00:06:21.351 "num_blocks": 256, 00:06:21.351 "uuid": "14a59e34-5fe8-40ec-a5f6-332f9386a8b7", 00:06:21.351 "assigned_rate_limits": { 00:06:21.351 "rw_ios_per_sec": 0, 00:06:21.351 "rw_mbytes_per_sec": 0, 00:06:21.351 "r_mbytes_per_sec": 0, 00:06:21.351 "w_mbytes_per_sec": 0 00:06:21.351 }, 00:06:21.351 "claimed": false, 00:06:21.351 "zoned": false, 00:06:21.351 "supported_io_types": { 00:06:21.351 "read": true, 00:06:21.351 "write": true, 00:06:21.351 "unmap": true, 00:06:21.351 "flush": true, 00:06:21.351 "reset": true, 00:06:21.351 "nvme_admin": false, 00:06:21.351 "nvme_io": false, 00:06:21.351 "nvme_io_md": false, 00:06:21.351 "write_zeroes": true, 00:06:21.351 "zcopy": true, 00:06:21.351 "get_zone_info": false, 00:06:21.351 "zone_management": false, 00:06:21.351 "zone_append": false, 00:06:21.351 "compare": false, 00:06:21.351 "compare_and_write": false, 00:06:21.351 "abort": true, 00:06:21.351 "seek_hole": false, 00:06:21.351 "seek_data": false, 00:06:21.351 "copy": true, 00:06:21.351 "nvme_iov_md": false 00:06:21.351 }, 00:06:21.351 "memory_domains": [ 00:06:21.351 { 00:06:21.351 "dma_device_id": "system", 00:06:21.351 "dma_device_type": 1 00:06:21.352 }, 00:06:21.352 { 00:06:21.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.352 "dma_device_type": 2 00:06:21.352 } 00:06:21.352 ], 00:06:21.352 "driver_specific": {} 00:06:21.352 } 00:06:21.352 ]' 00:06:21.352 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:21.352 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:21.352 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:21.352 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.352 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.352 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.352 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:21.352 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.352 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.352 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.352 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:21.352 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:21.352 16:11:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:21.352 00:06:21.352 real 0m0.159s 00:06:21.352 user 0m0.107s 00:06:21.352 sys 0m0.015s 00:06:21.352 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.352 ************************************ 00:06:21.352 END TEST rpc_plugins 00:06:21.352 16:11:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:21.352 ************************************ 00:06:21.352 16:11:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:21.352 16:11:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.352 16:11:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.352 16:11:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.352 ************************************ 00:06:21.352 START TEST 
rpc_trace_cmd_test 00:06:21.352 ************************************ 00:06:21.352 16:11:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:21.352 16:11:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:21.352 16:11:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:21.352 16:11:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.352 16:11:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.352 16:11:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.352 16:11:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:21.352 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68969", 00:06:21.352 "tpoint_group_mask": "0x8", 00:06:21.352 "iscsi_conn": { 00:06:21.352 "mask": "0x2", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "scsi": { 00:06:21.352 "mask": "0x4", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "bdev": { 00:06:21.352 "mask": "0x8", 00:06:21.352 "tpoint_mask": "0xffffffffffffffff" 00:06:21.352 }, 00:06:21.352 "nvmf_rdma": { 00:06:21.352 "mask": "0x10", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "nvmf_tcp": { 00:06:21.352 "mask": "0x20", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "ftl": { 00:06:21.352 "mask": "0x40", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "blobfs": { 00:06:21.352 "mask": "0x80", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "dsa": { 00:06:21.352 "mask": "0x200", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "thread": { 00:06:21.352 "mask": "0x400", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "nvme_pcie": { 00:06:21.352 "mask": "0x800", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "iaa": { 00:06:21.352 "mask": "0x1000", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "nvme_tcp": { 00:06:21.352 "mask": "0x2000", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "bdev_nvme": { 00:06:21.352 "mask": "0x4000", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "sock": { 00:06:21.352 "mask": "0x8000", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "blob": { 00:06:21.352 "mask": "0x10000", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "bdev_raid": { 00:06:21.352 "mask": "0x20000", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 }, 00:06:21.352 "scheduler": { 00:06:21.352 "mask": "0x40000", 00:06:21.352 "tpoint_mask": "0x0" 00:06:21.352 } 00:06:21.352 }' 00:06:21.352 16:11:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:21.612 00:06:21.612 real 
0m0.276s 00:06:21.612 user 0m0.238s 00:06:21.612 sys 0m0.029s 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.612 ************************************ 00:06:21.612 16:11:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.612 END TEST rpc_trace_cmd_test 00:06:21.612 ************************************ 00:06:21.871 16:11:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:21.871 16:11:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:21.871 16:11:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:21.871 16:11:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.871 16:11:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.871 16:11:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.871 ************************************ 00:06:21.871 START TEST rpc_daemon_integrity 00:06:21.871 ************************************ 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:21.871 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:21.872 { 00:06:21.872 "name": "Malloc2", 00:06:21.872 "aliases": [ 00:06:21.872 "e01d8f39-cfd1-4ea5-bb99-a3dcdb98ec09" 00:06:21.872 ], 00:06:21.872 "product_name": "Malloc disk", 00:06:21.872 "block_size": 512, 00:06:21.872 "num_blocks": 16384, 00:06:21.872 "uuid": "e01d8f39-cfd1-4ea5-bb99-a3dcdb98ec09", 00:06:21.872 "assigned_rate_limits": { 00:06:21.872 "rw_ios_per_sec": 0, 00:06:21.872 "rw_mbytes_per_sec": 0, 00:06:21.872 "r_mbytes_per_sec": 0, 00:06:21.872 "w_mbytes_per_sec": 0 00:06:21.872 }, 00:06:21.872 "claimed": false, 00:06:21.872 "zoned": false, 00:06:21.872 "supported_io_types": { 00:06:21.872 "read": true, 00:06:21.872 "write": true, 00:06:21.872 "unmap": true, 00:06:21.872 "flush": true, 00:06:21.872 "reset": true, 00:06:21.872 "nvme_admin": false, 00:06:21.872 "nvme_io": false, 00:06:21.872 "nvme_io_md": false, 00:06:21.872 "write_zeroes": true, 00:06:21.872 "zcopy": true, 
00:06:21.872 "get_zone_info": false, 00:06:21.872 "zone_management": false, 00:06:21.872 "zone_append": false, 00:06:21.872 "compare": false, 00:06:21.872 "compare_and_write": false, 00:06:21.872 "abort": true, 00:06:21.872 "seek_hole": false, 00:06:21.872 "seek_data": false, 00:06:21.872 "copy": true, 00:06:21.872 "nvme_iov_md": false 00:06:21.872 }, 00:06:21.872 "memory_domains": [ 00:06:21.872 { 00:06:21.872 "dma_device_id": "system", 00:06:21.872 "dma_device_type": 1 00:06:21.872 }, 00:06:21.872 { 00:06:21.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.872 "dma_device_type": 2 00:06:21.872 } 00:06:21.872 ], 00:06:21.872 "driver_specific": {} 00:06:21.872 } 00:06:21.872 ]' 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 [2024-11-26 16:11:47.438575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:21.872 [2024-11-26 16:11:47.438652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.872 [2024-11-26 16:11:47.438672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c4f430 00:06:21.872 [2024-11-26 16:11:47.438682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.872 [2024-11-26 16:11:47.440054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.872 [2024-11-26 16:11:47.440092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:21.872 Passthru0 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:21.872 { 00:06:21.872 "name": "Malloc2", 00:06:21.872 "aliases": [ 00:06:21.872 "e01d8f39-cfd1-4ea5-bb99-a3dcdb98ec09" 00:06:21.872 ], 00:06:21.872 "product_name": "Malloc disk", 00:06:21.872 "block_size": 512, 00:06:21.872 "num_blocks": 16384, 00:06:21.872 "uuid": "e01d8f39-cfd1-4ea5-bb99-a3dcdb98ec09", 00:06:21.872 "assigned_rate_limits": { 00:06:21.872 "rw_ios_per_sec": 0, 00:06:21.872 "rw_mbytes_per_sec": 0, 00:06:21.872 "r_mbytes_per_sec": 0, 00:06:21.872 "w_mbytes_per_sec": 0 00:06:21.872 }, 00:06:21.872 "claimed": true, 00:06:21.872 "claim_type": "exclusive_write", 00:06:21.872 "zoned": false, 00:06:21.872 "supported_io_types": { 00:06:21.872 "read": true, 00:06:21.872 "write": true, 00:06:21.872 "unmap": true, 00:06:21.872 "flush": true, 00:06:21.872 "reset": true, 00:06:21.872 "nvme_admin": false, 00:06:21.872 "nvme_io": false, 00:06:21.872 "nvme_io_md": false, 00:06:21.872 "write_zeroes": true, 00:06:21.872 "zcopy": true, 00:06:21.872 "get_zone_info": false, 00:06:21.872 "zone_management": false, 00:06:21.872 "zone_append": false, 00:06:21.872 
"compare": false, 00:06:21.872 "compare_and_write": false, 00:06:21.872 "abort": true, 00:06:21.872 "seek_hole": false, 00:06:21.872 "seek_data": false, 00:06:21.872 "copy": true, 00:06:21.872 "nvme_iov_md": false 00:06:21.872 }, 00:06:21.872 "memory_domains": [ 00:06:21.872 { 00:06:21.872 "dma_device_id": "system", 00:06:21.872 "dma_device_type": 1 00:06:21.872 }, 00:06:21.872 { 00:06:21.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.872 "dma_device_type": 2 00:06:21.872 } 00:06:21.872 ], 00:06:21.872 "driver_specific": {} 00:06:21.872 }, 00:06:21.872 { 00:06:21.872 "name": "Passthru0", 00:06:21.872 "aliases": [ 00:06:21.872 "24992438-b413-59e8-8ee4-5288c7a2e8a9" 00:06:21.872 ], 00:06:21.872 "product_name": "passthru", 00:06:21.872 "block_size": 512, 00:06:21.872 "num_blocks": 16384, 00:06:21.872 "uuid": "24992438-b413-59e8-8ee4-5288c7a2e8a9", 00:06:21.872 "assigned_rate_limits": { 00:06:21.872 "rw_ios_per_sec": 0, 00:06:21.872 "rw_mbytes_per_sec": 0, 00:06:21.872 "r_mbytes_per_sec": 0, 00:06:21.872 "w_mbytes_per_sec": 0 00:06:21.872 }, 00:06:21.872 "claimed": false, 00:06:21.872 "zoned": false, 00:06:21.872 "supported_io_types": { 00:06:21.872 "read": true, 00:06:21.872 "write": true, 00:06:21.872 "unmap": true, 00:06:21.872 "flush": true, 00:06:21.872 "reset": true, 00:06:21.872 "nvme_admin": false, 00:06:21.872 "nvme_io": false, 00:06:21.872 "nvme_io_md": false, 00:06:21.872 "write_zeroes": true, 00:06:21.872 "zcopy": true, 00:06:21.872 "get_zone_info": false, 00:06:21.872 "zone_management": false, 00:06:21.872 "zone_append": false, 00:06:21.872 "compare": false, 00:06:21.872 "compare_and_write": false, 00:06:21.872 "abort": true, 00:06:21.872 "seek_hole": false, 00:06:21.872 "seek_data": false, 00:06:21.872 "copy": true, 00:06:21.872 "nvme_iov_md": false 00:06:21.872 }, 00:06:21.872 "memory_domains": [ 00:06:21.872 { 00:06:21.872 "dma_device_id": "system", 00:06:21.872 "dma_device_type": 1 00:06:21.872 }, 00:06:21.872 { 00:06:21.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.872 "dma_device_type": 2 00:06:21.872 } 00:06:21.872 ], 00:06:21.872 "driver_specific": { 00:06:21.872 "passthru": { 00:06:21.872 "name": "Passthru0", 00:06:21.872 "base_bdev_name": "Malloc2" 00:06:21.872 } 00:06:21.872 } 00:06:21.872 } 00:06:21.872 ]' 00:06:21.872 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.132 16:11:47 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:22.132 00:06:22.132 real 0m0.346s 00:06:22.132 user 0m0.227s 00:06:22.132 sys 0m0.040s 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.132 ************************************ 00:06:22.132 16:11:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.132 END TEST rpc_daemon_integrity 00:06:22.132 ************************************ 00:06:22.132 16:11:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:22.132 16:11:47 rpc -- rpc/rpc.sh@84 -- # killprocess 68969 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 68969 ']' 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@958 -- # kill -0 68969 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@959 -- # uname 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68969 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.132 killing process with pid 68969 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68969' 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@973 -- # kill 68969 00:06:22.132 16:11:47 rpc -- common/autotest_common.sh@978 -- # wait 68969 00:06:22.392 00:06:22.392 real 0m2.174s 00:06:22.392 user 0m2.948s 00:06:22.392 sys 0m0.567s 00:06:22.392 16:11:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.392 16:11:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.392 ************************************ 00:06:22.392 END TEST rpc 00:06:22.392 ************************************ 00:06:22.392 16:11:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:22.392 16:11:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.392 16:11:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.392 16:11:47 -- common/autotest_common.sh@10 -- # set +x 00:06:22.392 ************************************ 00:06:22.392 START TEST skip_rpc 00:06:22.392 ************************************ 00:06:22.392 16:11:47 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:22.651 * Looking for test storage... 
00:06:22.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:22.651 16:11:48 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.651 16:11:48 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.651 16:11:48 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.651 16:11:48 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.651 16:11:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:22.651 16:11:48 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.651 16:11:48 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.651 --rc genhtml_branch_coverage=1 00:06:22.651 --rc genhtml_function_coverage=1 00:06:22.651 --rc genhtml_legend=1 00:06:22.651 --rc geninfo_all_blocks=1 00:06:22.651 --rc geninfo_unexecuted_blocks=1 00:06:22.651 00:06:22.651 ' 00:06:22.651 16:11:48 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.651 --rc genhtml_branch_coverage=1 00:06:22.651 --rc genhtml_function_coverage=1 00:06:22.651 --rc genhtml_legend=1 00:06:22.651 --rc geninfo_all_blocks=1 00:06:22.651 --rc geninfo_unexecuted_blocks=1 00:06:22.651 00:06:22.651 ' 00:06:22.651 16:11:48 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:22.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.651 --rc genhtml_branch_coverage=1 00:06:22.651 --rc genhtml_function_coverage=1 00:06:22.652 --rc genhtml_legend=1 00:06:22.652 --rc geninfo_all_blocks=1 00:06:22.652 --rc geninfo_unexecuted_blocks=1 00:06:22.652 00:06:22.652 ' 00:06:22.652 16:11:48 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.652 --rc genhtml_branch_coverage=1 00:06:22.652 --rc genhtml_function_coverage=1 00:06:22.652 --rc genhtml_legend=1 00:06:22.652 --rc geninfo_all_blocks=1 00:06:22.652 --rc geninfo_unexecuted_blocks=1 00:06:22.652 00:06:22.652 ' 00:06:22.652 16:11:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:22.652 16:11:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:22.652 16:11:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:22.652 16:11:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.652 16:11:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.652 16:11:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.652 ************************************ 00:06:22.652 START TEST skip_rpc 00:06:22.652 ************************************ 00:06:22.652 16:11:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:22.652 16:11:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69162 00:06:22.652 16:11:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.652 16:11:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:22.652 16:11:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:22.652 [2024-11-26 16:11:48.239043] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
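This target instance is launched with --no-rpc-server, so no listener is created on /var/tmp/spdk.sock and the spdk_get_version probe that follows is expected to fail. A minimal manual check of the same property, under the same path assumption as above:

  if $SPDK/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; then
      echo "unexpected: RPC server answered despite --no-rpc-server"
  else
      echo "expected: no RPC listener"
  fi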
00:06:22.652 [2024-11-26 16:11:48.239147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69162 ] 00:06:22.911 [2024-11-26 16:11:48.383934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.911 [2024-11-26 16:11:48.404382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.911 [2024-11-26 16:11:48.443793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69162 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69162 ']' 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69162 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69162 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.181 killing process with pid 69162 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69162' 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69162 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69162 00:06:28.181 00:06:28.181 real 0m5.276s 00:06:28.181 user 0m5.006s 00:06:28.181 sys 0m0.186s 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.181 16:11:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:28.181 ************************************ 00:06:28.181 END TEST skip_rpc 00:06:28.181 ************************************ 00:06:28.181 16:11:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:28.181 16:11:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.181 16:11:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.181 16:11:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.181 ************************************ 00:06:28.181 START TEST skip_rpc_with_json 00:06:28.181 ************************************ 00:06:28.181 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:28.181 16:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:28.181 16:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69249 00:06:28.181 16:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.182 16:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.182 16:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69249 00:06:28.182 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69249 ']' 00:06:28.182 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.182 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.182 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.182 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.182 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.182 [2024-11-26 16:11:53.559853] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
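The skip_rpc_with_json stage that begins here first queries for a TCP transport (expecting the "does not exist" error shown below), creates one, and then snapshots the full target configuration with save_config into the CONFIG_PATH set earlier. Approximately, by hand:

  $SPDK/scripts/rpc.py nvmf_get_transports --trtype tcp       # errors until a transport exists
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp
  $SPDK/scripts/rpc.py save_config > $SPDK/test/rpc/config.json

The saved config.json (partially dumped below) is presumably what a later step of the test feeds back into a fresh target to confirm the state restores cleanly.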
00:06:28.182 [2024-11-26 16:11:53.559990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69249 ] 00:06:28.182 [2024-11-26 16:11:53.707872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.182 [2024-11-26 16:11:53.727714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.182 [2024-11-26 16:11:53.765037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.440 [2024-11-26 16:11:53.895503] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:28.440 request: 00:06:28.440 { 00:06:28.440 "trtype": "tcp", 00:06:28.440 "method": "nvmf_get_transports", 00:06:28.440 "req_id": 1 00:06:28.440 } 00:06:28.440 Got JSON-RPC error response 00:06:28.440 response: 00:06:28.440 { 00:06:28.440 "code": -19, 00:06:28.440 "message": "No such device" 00:06:28.440 } 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.440 [2024-11-26 16:11:53.907615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.440 16:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:28.700 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.700 16:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:28.700 { 00:06:28.700 "subsystems": [ 00:06:28.700 { 00:06:28.700 "subsystem": "fsdev", 00:06:28.700 "config": [ 00:06:28.700 { 00:06:28.700 "method": "fsdev_set_opts", 00:06:28.700 "params": { 00:06:28.700 "fsdev_io_pool_size": 65535, 00:06:28.700 "fsdev_io_cache_size": 256 00:06:28.700 } 00:06:28.700 } 00:06:28.700 ] 00:06:28.700 }, 00:06:28.700 { 00:06:28.700 "subsystem": "keyring", 00:06:28.700 "config": [] 00:06:28.700 }, 00:06:28.700 { 00:06:28.700 "subsystem": "iobuf", 00:06:28.700 "config": [ 00:06:28.700 { 00:06:28.700 "method": "iobuf_set_options", 00:06:28.700 "params": { 00:06:28.700 "small_pool_count": 8192, 00:06:28.700 "large_pool_count": 1024, 00:06:28.700 "small_bufsize": 8192, 00:06:28.700 "large_bufsize": 135168, 00:06:28.700 "enable_numa": false 00:06:28.700 } 
00:06:28.700 } 00:06:28.700 ] 00:06:28.700 }, 00:06:28.700 { 00:06:28.700 "subsystem": "sock", 00:06:28.700 "config": [ 00:06:28.700 { 00:06:28.700 "method": "sock_set_default_impl", 00:06:28.700 "params": { 00:06:28.700 "impl_name": "uring" 00:06:28.700 } 00:06:28.700 }, 00:06:28.700 { 00:06:28.700 "method": "sock_impl_set_options", 00:06:28.700 "params": { 00:06:28.700 "impl_name": "ssl", 00:06:28.700 "recv_buf_size": 4096, 00:06:28.700 "send_buf_size": 4096, 00:06:28.700 "enable_recv_pipe": true, 00:06:28.700 "enable_quickack": false, 00:06:28.700 "enable_placement_id": 0, 00:06:28.700 "enable_zerocopy_send_server": true, 00:06:28.700 "enable_zerocopy_send_client": false, 00:06:28.700 "zerocopy_threshold": 0, 00:06:28.700 "tls_version": 0, 00:06:28.700 "enable_ktls": false 00:06:28.700 } 00:06:28.700 }, 00:06:28.700 { 00:06:28.700 "method": "sock_impl_set_options", 00:06:28.700 "params": { 00:06:28.700 "impl_name": "posix", 00:06:28.700 "recv_buf_size": 2097152, 00:06:28.700 "send_buf_size": 2097152, 00:06:28.700 "enable_recv_pipe": true, 00:06:28.700 "enable_quickack": false, 00:06:28.700 "enable_placement_id": 0, 00:06:28.700 "enable_zerocopy_send_server": true, 00:06:28.700 "enable_zerocopy_send_client": false, 00:06:28.700 "zerocopy_threshold": 0, 00:06:28.700 "tls_version": 0, 00:06:28.700 "enable_ktls": false 00:06:28.700 } 00:06:28.700 }, 00:06:28.700 { 00:06:28.700 "method": "sock_impl_set_options", 00:06:28.700 "params": { 00:06:28.700 "impl_name": "uring", 00:06:28.700 "recv_buf_size": 2097152, 00:06:28.700 "send_buf_size": 2097152, 00:06:28.700 "enable_recv_pipe": true, 00:06:28.700 "enable_quickack": false, 00:06:28.700 "enable_placement_id": 0, 00:06:28.700 "enable_zerocopy_send_server": false, 00:06:28.700 "enable_zerocopy_send_client": false, 00:06:28.700 "zerocopy_threshold": 0, 00:06:28.700 "tls_version": 0, 00:06:28.700 "enable_ktls": false 00:06:28.700 } 00:06:28.700 } 00:06:28.700 ] 00:06:28.700 }, 00:06:28.700 { 00:06:28.700 "subsystem": "vmd", 00:06:28.701 "config": [] 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "accel", 00:06:28.701 "config": [ 00:06:28.701 { 00:06:28.701 "method": "accel_set_options", 00:06:28.701 "params": { 00:06:28.701 "small_cache_size": 128, 00:06:28.701 "large_cache_size": 16, 00:06:28.701 "task_count": 2048, 00:06:28.701 "sequence_count": 2048, 00:06:28.701 "buf_count": 2048 00:06:28.701 } 00:06:28.701 } 00:06:28.701 ] 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "bdev", 00:06:28.701 "config": [ 00:06:28.701 { 00:06:28.701 "method": "bdev_set_options", 00:06:28.701 "params": { 00:06:28.701 "bdev_io_pool_size": 65535, 00:06:28.701 "bdev_io_cache_size": 256, 00:06:28.701 "bdev_auto_examine": true, 00:06:28.701 "iobuf_small_cache_size": 128, 00:06:28.701 "iobuf_large_cache_size": 16 00:06:28.701 } 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "method": "bdev_raid_set_options", 00:06:28.701 "params": { 00:06:28.701 "process_window_size_kb": 1024, 00:06:28.701 "process_max_bandwidth_mb_sec": 0 00:06:28.701 } 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "method": "bdev_iscsi_set_options", 00:06:28.701 "params": { 00:06:28.701 "timeout_sec": 30 00:06:28.701 } 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "method": "bdev_nvme_set_options", 00:06:28.701 "params": { 00:06:28.701 "action_on_timeout": "none", 00:06:28.701 "timeout_us": 0, 00:06:28.701 "timeout_admin_us": 0, 00:06:28.701 "keep_alive_timeout_ms": 10000, 00:06:28.701 "arbitration_burst": 0, 00:06:28.701 "low_priority_weight": 0, 00:06:28.701 "medium_priority_weight": 
0, 00:06:28.701 "high_priority_weight": 0, 00:06:28.701 "nvme_adminq_poll_period_us": 10000, 00:06:28.701 "nvme_ioq_poll_period_us": 0, 00:06:28.701 "io_queue_requests": 0, 00:06:28.701 "delay_cmd_submit": true, 00:06:28.701 "transport_retry_count": 4, 00:06:28.701 "bdev_retry_count": 3, 00:06:28.701 "transport_ack_timeout": 0, 00:06:28.701 "ctrlr_loss_timeout_sec": 0, 00:06:28.701 "reconnect_delay_sec": 0, 00:06:28.701 "fast_io_fail_timeout_sec": 0, 00:06:28.701 "disable_auto_failback": false, 00:06:28.701 "generate_uuids": false, 00:06:28.701 "transport_tos": 0, 00:06:28.701 "nvme_error_stat": false, 00:06:28.701 "rdma_srq_size": 0, 00:06:28.701 "io_path_stat": false, 00:06:28.701 "allow_accel_sequence": false, 00:06:28.701 "rdma_max_cq_size": 0, 00:06:28.701 "rdma_cm_event_timeout_ms": 0, 00:06:28.701 "dhchap_digests": [ 00:06:28.701 "sha256", 00:06:28.701 "sha384", 00:06:28.701 "sha512" 00:06:28.701 ], 00:06:28.701 "dhchap_dhgroups": [ 00:06:28.701 "null", 00:06:28.701 "ffdhe2048", 00:06:28.701 "ffdhe3072", 00:06:28.701 "ffdhe4096", 00:06:28.701 "ffdhe6144", 00:06:28.701 "ffdhe8192" 00:06:28.701 ] 00:06:28.701 } 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "method": "bdev_nvme_set_hotplug", 00:06:28.701 "params": { 00:06:28.701 "period_us": 100000, 00:06:28.701 "enable": false 00:06:28.701 } 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "method": "bdev_wait_for_examine" 00:06:28.701 } 00:06:28.701 ] 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "scsi", 00:06:28.701 "config": null 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "scheduler", 00:06:28.701 "config": [ 00:06:28.701 { 00:06:28.701 "method": "framework_set_scheduler", 00:06:28.701 "params": { 00:06:28.701 "name": "static" 00:06:28.701 } 00:06:28.701 } 00:06:28.701 ] 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "vhost_scsi", 00:06:28.701 "config": [] 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "vhost_blk", 00:06:28.701 "config": [] 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "ublk", 00:06:28.701 "config": [] 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "nbd", 00:06:28.701 "config": [] 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "nvmf", 00:06:28.701 "config": [ 00:06:28.701 { 00:06:28.701 "method": "nvmf_set_config", 00:06:28.701 "params": { 00:06:28.701 "discovery_filter": "match_any", 00:06:28.701 "admin_cmd_passthru": { 00:06:28.701 "identify_ctrlr": false 00:06:28.701 }, 00:06:28.701 "dhchap_digests": [ 00:06:28.701 "sha256", 00:06:28.701 "sha384", 00:06:28.701 "sha512" 00:06:28.701 ], 00:06:28.701 "dhchap_dhgroups": [ 00:06:28.701 "null", 00:06:28.701 "ffdhe2048", 00:06:28.701 "ffdhe3072", 00:06:28.701 "ffdhe4096", 00:06:28.701 "ffdhe6144", 00:06:28.701 "ffdhe8192" 00:06:28.701 ] 00:06:28.701 } 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "method": "nvmf_set_max_subsystems", 00:06:28.701 "params": { 00:06:28.701 "max_subsystems": 1024 00:06:28.701 } 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "method": "nvmf_set_crdt", 00:06:28.701 "params": { 00:06:28.701 "crdt1": 0, 00:06:28.701 "crdt2": 0, 00:06:28.701 "crdt3": 0 00:06:28.701 } 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "method": "nvmf_create_transport", 00:06:28.701 "params": { 00:06:28.701 "trtype": "TCP", 00:06:28.701 "max_queue_depth": 128, 00:06:28.701 "max_io_qpairs_per_ctrlr": 127, 00:06:28.701 "in_capsule_data_size": 4096, 00:06:28.701 "max_io_size": 131072, 00:06:28.701 "io_unit_size": 131072, 00:06:28.701 "max_aq_depth": 128, 00:06:28.701 "num_shared_buffers": 511, 00:06:28.701 
"buf_cache_size": 4294967295, 00:06:28.701 "dif_insert_or_strip": false, 00:06:28.701 "zcopy": false, 00:06:28.701 "c2h_success": true, 00:06:28.701 "sock_priority": 0, 00:06:28.701 "abort_timeout_sec": 1, 00:06:28.701 "ack_timeout": 0, 00:06:28.701 "data_wr_pool_size": 0 00:06:28.701 } 00:06:28.701 } 00:06:28.701 ] 00:06:28.701 }, 00:06:28.701 { 00:06:28.701 "subsystem": "iscsi", 00:06:28.701 "config": [ 00:06:28.701 { 00:06:28.701 "method": "iscsi_set_options", 00:06:28.701 "params": { 00:06:28.701 "node_base": "iqn.2016-06.io.spdk", 00:06:28.701 "max_sessions": 128, 00:06:28.701 "max_connections_per_session": 2, 00:06:28.701 "max_queue_depth": 64, 00:06:28.701 "default_time2wait": 2, 00:06:28.701 "default_time2retain": 20, 00:06:28.701 "first_burst_length": 8192, 00:06:28.701 "immediate_data": true, 00:06:28.701 "allow_duplicated_isid": false, 00:06:28.701 "error_recovery_level": 0, 00:06:28.701 "nop_timeout": 60, 00:06:28.701 "nop_in_interval": 30, 00:06:28.701 "disable_chap": false, 00:06:28.701 "require_chap": false, 00:06:28.701 "mutual_chap": false, 00:06:28.701 "chap_group": 0, 00:06:28.701 "max_large_datain_per_connection": 64, 00:06:28.701 "max_r2t_per_connection": 4, 00:06:28.701 "pdu_pool_size": 36864, 00:06:28.701 "immediate_data_pool_size": 16384, 00:06:28.701 "data_out_pool_size": 2048 00:06:28.701 } 00:06:28.701 } 00:06:28.701 ] 00:06:28.701 } 00:06:28.701 ] 00:06:28.701 } 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69249 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69249 ']' 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69249 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69249 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.701 killing process with pid 69249 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69249' 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69249 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69249 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69263 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:28.701 16:11:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69263 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69263 ']' 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69263 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:33.970 16:11:59 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69263 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.970 killing process with pid 69263 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69263' 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69263 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69263 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:33.970 00:06:33.970 real 0m6.117s 00:06:33.970 user 0m5.792s 00:06:33.970 sys 0m0.449s 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.970 ************************************ 00:06:33.970 END TEST skip_rpc_with_json 00:06:33.970 ************************************ 00:06:33.970 16:11:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:34.230 16:11:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:34.230 16:11:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.230 16:11:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.230 16:11:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.230 ************************************ 00:06:34.230 START TEST skip_rpc_with_delay 00:06:34.230 ************************************ 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.230 16:11:59 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:34.230 [2024-11-26 16:11:59.726927] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.230 00:06:34.230 real 0m0.085s 00:06:34.230 user 0m0.055s 00:06:34.230 sys 0m0.030s 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.230 16:11:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:34.230 ************************************ 00:06:34.230 END TEST skip_rpc_with_delay 00:06:34.230 ************************************ 00:06:34.230 16:11:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:34.230 16:11:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:34.230 16:11:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:34.230 16:11:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.230 16:11:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.230 16:11:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.230 ************************************ 00:06:34.230 START TEST exit_on_failed_rpc_init 00:06:34.230 ************************************ 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69373 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69373 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69373 ']' 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.230 16:11:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:34.230 [2024-11-26 16:11:59.872287] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:06:34.230 [2024-11-26 16:11:59.872422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69373 ] 00:06:34.489 [2024-11-26 16:12:00.021165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.489 [2024-11-26 16:12:00.042823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.489 [2024-11-26 16:12:00.081922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.748 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.748 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:34.749 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:34.749 [2024-11-26 16:12:00.274062] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:06:34.749 [2024-11-26 16:12:00.274181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69383 ] 00:06:35.008 [2024-11-26 16:12:00.425558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.008 [2024-11-26 16:12:00.449481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.008 [2024-11-26 16:12:00.449581] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:35.008 [2024-11-26 16:12:00.449599] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:35.008 [2024-11-26 16:12:00.449609] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69373 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69373 ']' 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69373 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69373 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.008 killing process with pid 69373 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69373' 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69373 00:06:35.008 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69373 00:06:35.268 00:06:35.268 real 0m0.954s 00:06:35.268 user 0m1.099s 00:06:35.268 sys 0m0.271s 00:06:35.268 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.268 16:12:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:35.268 ************************************ 00:06:35.268 END TEST exit_on_failed_rpc_init 00:06:35.268 ************************************ 00:06:35.268 16:12:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:35.268 00:06:35.268 real 0m12.821s 00:06:35.268 user 0m12.131s 00:06:35.268 sys 0m1.140s 00:06:35.268 16:12:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.268 16:12:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.268 ************************************ 00:06:35.268 END TEST skip_rpc 00:06:35.268 ************************************ 00:06:35.268 16:12:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:35.268 16:12:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.268 16:12:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.268 16:12:00 -- common/autotest_common.sh@10 -- # set +x 00:06:35.268 
************************************ 00:06:35.268 START TEST rpc_client 00:06:35.268 ************************************ 00:06:35.268 16:12:00 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:35.528 * Looking for test storage... 00:06:35.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:35.528 16:12:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.528 16:12:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.528 16:12:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.528 16:12:01 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.528 16:12:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:35.528 16:12:01 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.528 16:12:01 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.528 --rc genhtml_branch_coverage=1 00:06:35.528 --rc genhtml_function_coverage=1 00:06:35.528 --rc genhtml_legend=1 00:06:35.528 --rc geninfo_all_blocks=1 00:06:35.528 --rc geninfo_unexecuted_blocks=1 00:06:35.528 00:06:35.528 ' 00:06:35.528 16:12:01 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.528 --rc genhtml_branch_coverage=1 00:06:35.528 --rc genhtml_function_coverage=1 00:06:35.528 --rc genhtml_legend=1 00:06:35.528 --rc geninfo_all_blocks=1 00:06:35.528 --rc geninfo_unexecuted_blocks=1 00:06:35.528 00:06:35.528 ' 00:06:35.528 16:12:01 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.528 --rc genhtml_branch_coverage=1 00:06:35.528 --rc genhtml_function_coverage=1 00:06:35.528 --rc genhtml_legend=1 00:06:35.528 --rc geninfo_all_blocks=1 00:06:35.528 --rc geninfo_unexecuted_blocks=1 00:06:35.528 00:06:35.528 ' 00:06:35.528 16:12:01 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.528 --rc genhtml_branch_coverage=1 00:06:35.528 --rc genhtml_function_coverage=1 00:06:35.528 --rc genhtml_legend=1 00:06:35.528 --rc geninfo_all_blocks=1 00:06:35.528 --rc geninfo_unexecuted_blocks=1 00:06:35.528 00:06:35.528 ' 00:06:35.528 16:12:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:35.528 OK 00:06:35.528 16:12:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:35.528 00:06:35.528 real 0m0.210s 00:06:35.528 user 0m0.135s 00:06:35.528 sys 0m0.083s 00:06:35.528 16:12:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.528 16:12:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:35.528 ************************************ 00:06:35.528 END TEST rpc_client 00:06:35.528 ************************************ 00:06:35.528 16:12:01 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:35.528 16:12:01 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.528 16:12:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.528 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:06:35.528 ************************************ 00:06:35.528 START TEST json_config 00:06:35.528 ************************************ 00:06:35.528 16:12:01 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:35.528 16:12:01 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.528 16:12:01 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.528 16:12:01 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.788 16:12:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.788 16:12:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.788 16:12:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.788 16:12:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.788 16:12:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.788 16:12:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.788 16:12:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.788 16:12:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.788 16:12:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.788 16:12:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.788 16:12:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.788 16:12:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.788 16:12:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:35.788 16:12:01 json_config -- scripts/common.sh@345 -- # : 1 00:06:35.788 16:12:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.788 16:12:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.788 16:12:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:35.788 16:12:01 json_config -- scripts/common.sh@353 -- # local d=1 00:06:35.788 16:12:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.788 16:12:01 json_config -- scripts/common.sh@355 -- # echo 1 00:06:35.788 16:12:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.788 16:12:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:35.788 16:12:01 json_config -- scripts/common.sh@353 -- # local d=2 00:06:35.788 16:12:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.788 16:12:01 json_config -- scripts/common.sh@355 -- # echo 2 00:06:35.788 16:12:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.788 16:12:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.788 16:12:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.788 16:12:01 json_config -- scripts/common.sh@368 -- # return 0 00:06:35.788 16:12:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.788 16:12:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.788 --rc genhtml_branch_coverage=1 00:06:35.788 --rc genhtml_function_coverage=1 00:06:35.788 --rc genhtml_legend=1 00:06:35.788 --rc geninfo_all_blocks=1 00:06:35.788 --rc geninfo_unexecuted_blocks=1 00:06:35.788 00:06:35.788 ' 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.789 --rc genhtml_branch_coverage=1 00:06:35.789 --rc genhtml_function_coverage=1 00:06:35.789 --rc genhtml_legend=1 00:06:35.789 --rc geninfo_all_blocks=1 00:06:35.789 --rc geninfo_unexecuted_blocks=1 00:06:35.789 00:06:35.789 ' 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.789 --rc genhtml_branch_coverage=1 00:06:35.789 --rc genhtml_function_coverage=1 00:06:35.789 --rc genhtml_legend=1 00:06:35.789 --rc geninfo_all_blocks=1 00:06:35.789 --rc geninfo_unexecuted_blocks=1 00:06:35.789 00:06:35.789 ' 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.789 --rc genhtml_branch_coverage=1 00:06:35.789 --rc genhtml_function_coverage=1 00:06:35.789 --rc genhtml_legend=1 00:06:35.789 --rc geninfo_all_blocks=1 00:06:35.789 --rc geninfo_unexecuted_blocks=1 00:06:35.789 00:06:35.789 ' 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.789 16:12:01 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:35.789 16:12:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.789 16:12:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.789 16:12:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.789 16:12:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.789 16:12:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.789 16:12:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.789 16:12:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.789 16:12:01 json_config -- paths/export.sh@5 -- # export PATH 00:06:35.789 16:12:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@51 -- # : 0 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.789 16:12:01 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.789 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.789 16:12:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:35.789 INFO: JSON configuration test init 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 16:12:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:35.789 16:12:01 json_config -- json_config/common.sh@9 -- # local app=target 00:06:35.789 16:12:01 json_config -- json_config/common.sh@10 -- # shift 
00:06:35.789 16:12:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.789 16:12:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.789 16:12:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.789 16:12:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.789 16:12:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.789 16:12:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69517 00:06:35.789 Waiting for target to run... 00:06:35.789 16:12:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.789 16:12:01 json_config -- json_config/common.sh@25 -- # waitforlisten 69517 /var/tmp/spdk_tgt.sock 00:06:35.789 16:12:01 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 69517 ']' 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.789 16:12:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.789 [2024-11-26 16:12:01.361898] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:06:35.789 [2024-11-26 16:12:01.361992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69517 ] 00:06:36.049 [2024-11-26 16:12:01.681018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.049 [2024-11-26 16:12:01.694614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.987 16:12:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.987 16:12:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:36.987 00:06:36.987 16:12:02 json_config -- json_config/common.sh@26 -- # echo '' 00:06:36.987 16:12:02 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:36.987 16:12:02 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:36.987 16:12:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.987 16:12:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.987 16:12:02 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:36.987 16:12:02 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:36.987 16:12:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.987 16:12:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.987 16:12:02 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:36.987 16:12:02 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:36.987 16:12:02 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:37.246 [2024-11-26 16:12:02.677015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.246 16:12:02 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:37.246 16:12:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:37.246 16:12:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.246 16:12:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.246 16:12:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:37.246 16:12:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:37.246 16:12:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:37.246 16:12:02 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:37.246 16:12:02 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:37.246 16:12:02 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:37.246 16:12:02 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:37.246 16:12:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:37.506 16:12:03 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:37.506 16:12:03 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:37.506 16:12:03 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:37.506 16:12:03 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:37.506 16:12:03 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:37.506 16:12:03 json_config -- json_config/json_config.sh@54 -- # sort 00:06:37.506 16:12:03 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:37.765 16:12:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.765 16:12:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:37.765 16:12:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.765 16:12:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.765 16:12:03 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:37.765 16:12:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:37.765 16:12:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:38.025 MallocForNvmf0 00:06:38.025 16:12:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:38.025 16:12:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:38.285 MallocForNvmf1 00:06:38.285 16:12:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:38.285 16:12:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:38.544 [2024-11-26 16:12:04.031201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.544 16:12:04 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:38.544 16:12:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:38.803 16:12:04 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:38.803 16:12:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:39.063 16:12:04 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:39.063 16:12:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:39.322 16:12:04 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:39.322 16:12:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:39.581 [2024-11-26 16:12:05.019742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:39.581 16:12:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:39.581 16:12:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.581 16:12:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.581 16:12:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:39.581 16:12:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.581 16:12:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.581 16:12:05 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:06:39.581 16:12:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:39.581 16:12:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:39.840 MallocBdevForConfigChangeCheck 00:06:39.840 16:12:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:39.840 16:12:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.840 16:12:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.840 16:12:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:39.840 16:12:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:40.408 INFO: shutting down applications... 00:06:40.408 16:12:05 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:40.408 16:12:05 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:40.408 16:12:05 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:40.408 16:12:05 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:40.408 16:12:05 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:40.667 Calling clear_iscsi_subsystem 00:06:40.667 Calling clear_nvmf_subsystem 00:06:40.667 Calling clear_nbd_subsystem 00:06:40.667 Calling clear_ublk_subsystem 00:06:40.667 Calling clear_vhost_blk_subsystem 00:06:40.667 Calling clear_vhost_scsi_subsystem 00:06:40.667 Calling clear_bdev_subsystem 00:06:40.667 16:12:06 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:40.667 16:12:06 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:40.667 16:12:06 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:40.667 16:12:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:40.667 16:12:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:40.667 16:12:06 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:41.235 16:12:06 json_config -- json_config/json_config.sh@352 -- # break 00:06:41.235 16:12:06 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:41.235 16:12:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:41.235 16:12:06 json_config -- json_config/common.sh@31 -- # local app=target 00:06:41.235 16:12:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:41.235 16:12:06 json_config -- json_config/common.sh@35 -- # [[ -n 69517 ]] 00:06:41.235 16:12:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 69517 00:06:41.235 16:12:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:41.235 16:12:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:41.235 16:12:06 json_config -- json_config/common.sh@41 -- # kill -0 69517 00:06:41.235 16:12:06 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:06:41.804 16:12:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:41.804 16:12:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:41.805 16:12:07 json_config -- json_config/common.sh@41 -- # kill -0 69517 00:06:41.805 16:12:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:41.805 16:12:07 json_config -- json_config/common.sh@43 -- # break 00:06:41.805 16:12:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:41.805 SPDK target shutdown done 00:06:41.805 16:12:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:41.805 INFO: relaunching applications... 00:06:41.805 16:12:07 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:41.805 16:12:07 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:41.805 16:12:07 json_config -- json_config/common.sh@9 -- # local app=target 00:06:41.805 16:12:07 json_config -- json_config/common.sh@10 -- # shift 00:06:41.805 16:12:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:41.805 16:12:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:41.805 16:12:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:41.805 16:12:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.805 16:12:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.805 16:12:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69713 00:06:41.805 Waiting for target to run... 00:06:41.805 16:12:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:41.805 16:12:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:41.805 16:12:07 json_config -- json_config/common.sh@25 -- # waitforlisten 69713 /var/tmp/spdk_tgt.sock 00:06:41.805 16:12:07 json_config -- common/autotest_common.sh@835 -- # '[' -z 69713 ']' 00:06:41.805 16:12:07 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:41.805 16:12:07 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:41.805 16:12:07 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:41.805 16:12:07 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.805 16:12:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.805 [2024-11-26 16:12:07.238643] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:06:41.805 [2024-11-26 16:12:07.238779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69713 ] 00:06:42.064 [2024-11-26 16:12:07.562039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.064 [2024-11-26 16:12:07.576846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.064 [2024-11-26 16:12:07.705264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.324 [2024-11-26 16:12:07.901135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.324 [2024-11-26 16:12:07.933206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:42.584 16:12:08 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.584 00:06:42.584 16:12:08 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:42.584 16:12:08 json_config -- json_config/common.sh@26 -- # echo '' 00:06:42.584 16:12:08 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:42.584 INFO: Checking if target configuration is the same... 00:06:42.584 16:12:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:42.584 16:12:08 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:42.584 16:12:08 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:42.584 16:12:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.584 + '[' 2 -ne 2 ']' 00:06:42.584 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:42.584 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:42.584 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:42.584 +++ basename /dev/fd/62 00:06:42.584 ++ mktemp /tmp/62.XXX 00:06:42.584 + tmp_file_1=/tmp/62.Udq 00:06:42.584 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:42.584 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:42.584 + tmp_file_2=/tmp/spdk_tgt_config.json.9Wo 00:06:42.584 + ret=0 00:06:42.584 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:43.151 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:43.151 + diff -u /tmp/62.Udq /tmp/spdk_tgt_config.json.9Wo 00:06:43.151 INFO: JSON config files are the same 00:06:43.151 + echo 'INFO: JSON config files are the same' 00:06:43.151 + rm /tmp/62.Udq /tmp/spdk_tgt_config.json.9Wo 00:06:43.151 + exit 0 00:06:43.151 16:12:08 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:43.151 INFO: changing configuration and checking if this can be detected... 00:06:43.151 16:12:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
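The "JSON config files are the same" verdict above comes from json_diff.sh, which normalizes both sides with config_filter.py -method sort before diffing, so key ordering in the saved file cannot produce a false mismatch. Roughly, with illustrative temp-file names:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
      < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved_sorted.json
  diff -u /tmp/saved_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'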
00:06:43.151 16:12:08 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:43.151 16:12:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:43.410 16:12:08 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:43.410 16:12:08 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:43.410 16:12:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:43.410 + '[' 2 -ne 2 ']' 00:06:43.410 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:43.410 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:43.410 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:43.410 +++ basename /dev/fd/62 00:06:43.410 ++ mktemp /tmp/62.XXX 00:06:43.410 + tmp_file_1=/tmp/62.2Eb 00:06:43.410 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:43.410 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:43.410 + tmp_file_2=/tmp/spdk_tgt_config.json.BAl 00:06:43.410 + ret=0 00:06:43.410 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:43.676 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:43.941 + diff -u /tmp/62.2Eb /tmp/spdk_tgt_config.json.BAl 00:06:43.941 + ret=1 00:06:43.941 + echo '=== Start of file: /tmp/62.2Eb ===' 00:06:43.941 + cat /tmp/62.2Eb 00:06:43.941 + echo '=== End of file: /tmp/62.2Eb ===' 00:06:43.941 + echo '' 00:06:43.941 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BAl ===' 00:06:43.941 + cat /tmp/spdk_tgt_config.json.BAl 00:06:43.941 + echo '=== End of file: /tmp/spdk_tgt_config.json.BAl ===' 00:06:43.941 + echo '' 00:06:43.941 + rm /tmp/62.2Eb /tmp/spdk_tgt_config.json.BAl 00:06:43.941 + exit 1 00:06:43.941 INFO: configuration change detected. 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
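To prove a real change is detectable, the test then deletes the marker bdev it created earlier and repeats the same sorted diff, this time expecting a non-zero exit status. In sketch form, reusing the illustrative file names from above (regenerated after the delete):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      bdev_malloc_delete MallocBdevForConfigChangeCheck
  diff -u /tmp/saved_sorted.json /tmp/live_sorted.json \
      || echo 'INFO: configuration change detected.'   # diff exits 1, as logged above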
00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@324 -- # [[ -n 69713 ]] 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.941 16:12:09 json_config -- json_config/json_config.sh@330 -- # killprocess 69713 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@954 -- # '[' -z 69713 ']' 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@958 -- # kill -0 69713 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@959 -- # uname 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69713 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.941 killing process with pid 69713 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69713' 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@973 -- # kill 69713 00:06:43.941 16:12:09 json_config -- common/autotest_common.sh@978 -- # wait 69713 00:06:44.199 16:12:09 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.199 16:12:09 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:44.199 16:12:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.199 16:12:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.199 16:12:09 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:44.199 INFO: Success 00:06:44.199 16:12:09 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:44.199 00:06:44.199 real 0m8.563s 00:06:44.199 user 0m12.498s 00:06:44.199 sys 0m1.486s 00:06:44.199 
16:12:09 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.199 16:12:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.199 ************************************ 00:06:44.199 END TEST json_config 00:06:44.199 ************************************ 00:06:44.199 16:12:09 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:44.199 16:12:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.199 16:12:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.199 16:12:09 -- common/autotest_common.sh@10 -- # set +x 00:06:44.199 ************************************ 00:06:44.199 START TEST json_config_extra_key 00:06:44.199 ************************************ 00:06:44.199 16:12:09 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:44.199 16:12:09 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.199 16:12:09 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.199 16:12:09 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.459 16:12:09 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:44.459 16:12:09 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.459 16:12:09 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.459 --rc genhtml_branch_coverage=1 00:06:44.459 --rc genhtml_function_coverage=1 00:06:44.459 --rc genhtml_legend=1 00:06:44.459 --rc geninfo_all_blocks=1 00:06:44.459 --rc geninfo_unexecuted_blocks=1 00:06:44.459 00:06:44.459 ' 00:06:44.459 16:12:09 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.459 --rc genhtml_branch_coverage=1 00:06:44.459 --rc genhtml_function_coverage=1 00:06:44.459 --rc genhtml_legend=1 00:06:44.459 --rc geninfo_all_blocks=1 00:06:44.459 --rc geninfo_unexecuted_blocks=1 00:06:44.459 00:06:44.459 ' 00:06:44.459 16:12:09 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.459 --rc genhtml_branch_coverage=1 00:06:44.459 --rc genhtml_function_coverage=1 00:06:44.459 --rc genhtml_legend=1 00:06:44.459 --rc geninfo_all_blocks=1 00:06:44.459 --rc geninfo_unexecuted_blocks=1 00:06:44.459 00:06:44.459 ' 00:06:44.459 16:12:09 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.459 --rc genhtml_branch_coverage=1 00:06:44.459 --rc genhtml_function_coverage=1 00:06:44.459 --rc genhtml_legend=1 00:06:44.459 --rc geninfo_all_blocks=1 00:06:44.459 --rc geninfo_unexecuted_blocks=1 00:06:44.459 00:06:44.459 ' 00:06:44.459 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.459 16:12:09 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.459 16:12:09 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.459 16:12:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.459 16:12:09 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.459 16:12:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.459 16:12:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:44.459 16:12:09 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.459 16:12:09 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.460 16:12:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.460 16:12:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.460 16:12:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.460 16:12:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.460 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.460 16:12:09 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.460 16:12:09 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.460 16:12:09 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:44.460 INFO: launching applications... 00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
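json_config/common.sh, sourced above, keys everything on a per-app name: associative arrays map each app ('target' here) to its pid, RPC socket, launch parameters, and config path, and an ERR trap funnels failures into on_error_exit. The values echoed in the log correspond to:

  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
  trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR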
00:06:44.460 16:12:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69867 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:44.460 Waiting for target to run... 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69867 /var/tmp/spdk_tgt.sock 00:06:44.460 16:12:09 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69867 ']' 00:06:44.460 16:12:09 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:44.460 16:12:09 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:44.460 16:12:09 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:44.460 16:12:09 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:44.460 16:12:09 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.460 16:12:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:44.460 [2024-11-26 16:12:10.018315] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:06:44.460 [2024-11-26 16:12:10.018447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69867 ] 00:06:44.719 [2024-11-26 16:12:10.348851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.719 [2024-11-26 16:12:10.364844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.987 [2024-11-26 16:12:10.388646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.568 16:12:11 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.569 16:12:11 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:45.569 00:06:45.569 16:12:11 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:45.569 INFO: shutting down applications... 00:06:45.569 16:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
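The launch above amounts to starting spdk_tgt against a pre-baked JSON config and blocking until its RPC socket answers. A rough equivalent of json_config_test_start_app plus waitforlisten, with the polling loop simplified from autotest_common.sh:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid[target]=$!
  # poll until the UNIX-domain socket accepts RPCs (the real helper retries ~100 times)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done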
00:06:45.569 16:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:45.569 16:12:11 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:45.569 16:12:11 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:45.569 16:12:11 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69867 ]] 00:06:45.569 16:12:11 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69867 00:06:45.569 16:12:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:45.569 16:12:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.569 16:12:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69867 00:06:45.569 16:12:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:46.137 16:12:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:46.137 16:12:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.137 16:12:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69867 00:06:46.137 16:12:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:46.137 16:12:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:46.137 16:12:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:46.137 SPDK target shutdown done 00:06:46.137 16:12:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:46.137 Success 00:06:46.137 16:12:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:46.137 00:06:46.137 real 0m1.811s 00:06:46.137 user 0m1.613s 00:06:46.137 sys 0m0.370s 00:06:46.137 16:12:11 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.137 16:12:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:46.137 ************************************ 00:06:46.137 END TEST json_config_extra_key 00:06:46.137 ************************************ 00:06:46.137 16:12:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:46.137 16:12:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.137 16:12:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.137 16:12:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.137 ************************************ 00:06:46.137 START TEST alias_rpc 00:06:46.137 ************************************ 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:46.137 * Looking for test storage... 
00:06:46.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.137 16:12:11 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.137 --rc genhtml_branch_coverage=1 00:06:46.137 --rc genhtml_function_coverage=1 00:06:46.137 --rc genhtml_legend=1 00:06:46.137 --rc geninfo_all_blocks=1 00:06:46.137 --rc geninfo_unexecuted_blocks=1 00:06:46.137 00:06:46.137 ' 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.137 --rc genhtml_branch_coverage=1 00:06:46.137 --rc genhtml_function_coverage=1 00:06:46.137 --rc genhtml_legend=1 00:06:46.137 --rc geninfo_all_blocks=1 00:06:46.137 --rc geninfo_unexecuted_blocks=1 00:06:46.137 00:06:46.137 ' 00:06:46.137 16:12:11 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.137 --rc genhtml_branch_coverage=1 00:06:46.137 --rc genhtml_function_coverage=1 00:06:46.137 --rc genhtml_legend=1 00:06:46.137 --rc geninfo_all_blocks=1 00:06:46.137 --rc geninfo_unexecuted_blocks=1 00:06:46.137 00:06:46.137 ' 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.137 --rc genhtml_branch_coverage=1 00:06:46.137 --rc genhtml_function_coverage=1 00:06:46.137 --rc genhtml_legend=1 00:06:46.137 --rc geninfo_all_blocks=1 00:06:46.137 --rc geninfo_unexecuted_blocks=1 00:06:46.137 00:06:46.137 ' 00:06:46.137 16:12:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:46.137 16:12:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69945 00:06:46.137 16:12:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69945 00:06:46.137 16:12:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69945 ']' 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.137 16:12:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.396 [2024-11-26 16:12:11.826114] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
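The scripts/common.sh block repeated before each test above is a version gate: it reads the installed lcov version (lcov --version | awk '{print $NF}') and, when it is older than 2, keeps the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling of the coverage flags. A simplified stand-in for that comparison (the real cmp_versions also splits on '-' and ':'):

  version_lt() {
      local -a v1 v2
      IFS=. read -ra v1 <<< "$1"
      IFS=. read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          local a=${v1[i]:-0} b=${v2[i]:-0}
          ((a < b)) && return 0
          ((a > b)) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi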
00:06:46.396 [2024-11-26 16:12:11.826232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69945 ] 00:06:46.396 [2024-11-26 16:12:11.974872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.396 [2024-11-26 16:12:11.997668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.396 [2024-11-26 16:12:12.042162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.654 16:12:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.654 16:12:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.654 16:12:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:46.913 16:12:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69945 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69945 ']' 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69945 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69945 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.913 killing process with pid 69945 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69945' 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@973 -- # kill 69945 00:06:46.913 16:12:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 69945 00:06:47.172 00:06:47.172 real 0m1.178s 00:06:47.172 user 0m1.435s 00:06:47.172 sys 0m0.334s 00:06:47.172 16:12:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.172 ************************************ 00:06:47.172 END TEST alias_rpc 00:06:47.173 ************************************ 00:06:47.173 16:12:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.173 16:12:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:47.173 16:12:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:47.173 16:12:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.173 16:12:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.173 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:47.173 ************************************ 00:06:47.173 START TEST spdkcli_tcp 00:06:47.173 ************************************ 00:06:47.173 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:47.432 * Looking for test storage... 
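alias_rpc, which just finished above, is a narrow test: it launches a bare spdk_tgt, replays a configuration through rpc.py load_config -i (presumably the include-aliases switch, which lets deprecated method names through), and tears the target down again. In outline; the exact payload fed to load_config is not shown in the log and is assumed here:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < config_with_aliases.json   # payload is an assumption
  killprocess "$spdk_tgt_pid"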
00:06:47.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.432 16:12:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.432 --rc genhtml_branch_coverage=1 00:06:47.432 --rc genhtml_function_coverage=1 00:06:47.432 --rc genhtml_legend=1 00:06:47.432 --rc geninfo_all_blocks=1 00:06:47.432 --rc geninfo_unexecuted_blocks=1 00:06:47.432 00:06:47.432 ' 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.432 --rc genhtml_branch_coverage=1 00:06:47.432 --rc genhtml_function_coverage=1 00:06:47.432 --rc genhtml_legend=1 00:06:47.432 --rc geninfo_all_blocks=1 00:06:47.432 --rc geninfo_unexecuted_blocks=1 00:06:47.432 
00:06:47.432 ' 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.432 --rc genhtml_branch_coverage=1 00:06:47.432 --rc genhtml_function_coverage=1 00:06:47.432 --rc genhtml_legend=1 00:06:47.432 --rc geninfo_all_blocks=1 00:06:47.432 --rc geninfo_unexecuted_blocks=1 00:06:47.432 00:06:47.432 ' 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.432 --rc genhtml_branch_coverage=1 00:06:47.432 --rc genhtml_function_coverage=1 00:06:47.432 --rc genhtml_legend=1 00:06:47.432 --rc geninfo_all_blocks=1 00:06:47.432 --rc geninfo_unexecuted_blocks=1 00:06:47.432 00:06:47.432 ' 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.432 16:12:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70016 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70016 00:06:47.432 16:12:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:47.433 16:12:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70016 ']' 00:06:47.433 16:12:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.433 16:12:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.433 16:12:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.433 16:12:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.433 16:12:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.433 [2024-11-26 16:12:13.032863] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
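The spdkcli_tcp test set up here drives the same RPC surface over TCP instead of a UNIX socket: spdk_tgt runs on two cores (-m 0x3), socat bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py is pointed at 127.0.0.1:9998; the long rpc_get_methods listing that follows is fetched through that bridge. In outline, using the commands logged by tcp.sh:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods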
00:06:47.433 [2024-11-26 16:12:13.032968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70016 ] 00:06:47.692 [2024-11-26 16:12:13.172269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.692 [2024-11-26 16:12:13.191968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.692 [2024-11-26 16:12:13.191977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.692 [2024-11-26 16:12:13.228360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.629 16:12:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.629 16:12:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:48.629 16:12:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:48.629 16:12:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70033 00:06:48.629 16:12:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:48.629 [ 00:06:48.629 "bdev_malloc_delete", 00:06:48.629 "bdev_malloc_create", 00:06:48.629 "bdev_null_resize", 00:06:48.629 "bdev_null_delete", 00:06:48.629 "bdev_null_create", 00:06:48.629 "bdev_nvme_cuse_unregister", 00:06:48.629 "bdev_nvme_cuse_register", 00:06:48.629 "bdev_opal_new_user", 00:06:48.629 "bdev_opal_set_lock_state", 00:06:48.629 "bdev_opal_delete", 00:06:48.629 "bdev_opal_get_info", 00:06:48.629 "bdev_opal_create", 00:06:48.629 "bdev_nvme_opal_revert", 00:06:48.629 "bdev_nvme_opal_init", 00:06:48.629 "bdev_nvme_send_cmd", 00:06:48.629 "bdev_nvme_set_keys", 00:06:48.629 "bdev_nvme_get_path_iostat", 00:06:48.629 "bdev_nvme_get_mdns_discovery_info", 00:06:48.629 "bdev_nvme_stop_mdns_discovery", 00:06:48.629 "bdev_nvme_start_mdns_discovery", 00:06:48.629 "bdev_nvme_set_multipath_policy", 00:06:48.629 "bdev_nvme_set_preferred_path", 00:06:48.629 "bdev_nvme_get_io_paths", 00:06:48.629 "bdev_nvme_remove_error_injection", 00:06:48.629 "bdev_nvme_add_error_injection", 00:06:48.629 "bdev_nvme_get_discovery_info", 00:06:48.629 "bdev_nvme_stop_discovery", 00:06:48.629 "bdev_nvme_start_discovery", 00:06:48.629 "bdev_nvme_get_controller_health_info", 00:06:48.629 "bdev_nvme_disable_controller", 00:06:48.629 "bdev_nvme_enable_controller", 00:06:48.629 "bdev_nvme_reset_controller", 00:06:48.629 "bdev_nvme_get_transport_statistics", 00:06:48.629 "bdev_nvme_apply_firmware", 00:06:48.629 "bdev_nvme_detach_controller", 00:06:48.629 "bdev_nvme_get_controllers", 00:06:48.629 "bdev_nvme_attach_controller", 00:06:48.629 "bdev_nvme_set_hotplug", 00:06:48.629 "bdev_nvme_set_options", 00:06:48.629 "bdev_passthru_delete", 00:06:48.629 "bdev_passthru_create", 00:06:48.629 "bdev_lvol_set_parent_bdev", 00:06:48.629 "bdev_lvol_set_parent", 00:06:48.629 "bdev_lvol_check_shallow_copy", 00:06:48.629 "bdev_lvol_start_shallow_copy", 00:06:48.629 "bdev_lvol_grow_lvstore", 00:06:48.629 "bdev_lvol_get_lvols", 00:06:48.629 "bdev_lvol_get_lvstores", 00:06:48.629 "bdev_lvol_delete", 00:06:48.629 "bdev_lvol_set_read_only", 00:06:48.629 "bdev_lvol_resize", 00:06:48.629 "bdev_lvol_decouple_parent", 00:06:48.629 "bdev_lvol_inflate", 00:06:48.629 "bdev_lvol_rename", 00:06:48.629 "bdev_lvol_clone_bdev", 00:06:48.629 "bdev_lvol_clone", 00:06:48.629 "bdev_lvol_snapshot", 
00:06:48.630 "bdev_lvol_create", 00:06:48.630 "bdev_lvol_delete_lvstore", 00:06:48.630 "bdev_lvol_rename_lvstore", 00:06:48.630 "bdev_lvol_create_lvstore", 00:06:48.630 "bdev_raid_set_options", 00:06:48.630 "bdev_raid_remove_base_bdev", 00:06:48.630 "bdev_raid_add_base_bdev", 00:06:48.630 "bdev_raid_delete", 00:06:48.630 "bdev_raid_create", 00:06:48.630 "bdev_raid_get_bdevs", 00:06:48.630 "bdev_error_inject_error", 00:06:48.630 "bdev_error_delete", 00:06:48.630 "bdev_error_create", 00:06:48.630 "bdev_split_delete", 00:06:48.630 "bdev_split_create", 00:06:48.630 "bdev_delay_delete", 00:06:48.630 "bdev_delay_create", 00:06:48.630 "bdev_delay_update_latency", 00:06:48.630 "bdev_zone_block_delete", 00:06:48.630 "bdev_zone_block_create", 00:06:48.630 "blobfs_create", 00:06:48.630 "blobfs_detect", 00:06:48.630 "blobfs_set_cache_size", 00:06:48.630 "bdev_aio_delete", 00:06:48.630 "bdev_aio_rescan", 00:06:48.630 "bdev_aio_create", 00:06:48.630 "bdev_ftl_set_property", 00:06:48.630 "bdev_ftl_get_properties", 00:06:48.630 "bdev_ftl_get_stats", 00:06:48.630 "bdev_ftl_unmap", 00:06:48.630 "bdev_ftl_unload", 00:06:48.630 "bdev_ftl_delete", 00:06:48.630 "bdev_ftl_load", 00:06:48.630 "bdev_ftl_create", 00:06:48.630 "bdev_virtio_attach_controller", 00:06:48.630 "bdev_virtio_scsi_get_devices", 00:06:48.630 "bdev_virtio_detach_controller", 00:06:48.630 "bdev_virtio_blk_set_hotplug", 00:06:48.630 "bdev_iscsi_delete", 00:06:48.630 "bdev_iscsi_create", 00:06:48.630 "bdev_iscsi_set_options", 00:06:48.630 "bdev_uring_delete", 00:06:48.630 "bdev_uring_rescan", 00:06:48.630 "bdev_uring_create", 00:06:48.630 "accel_error_inject_error", 00:06:48.630 "ioat_scan_accel_module", 00:06:48.630 "dsa_scan_accel_module", 00:06:48.630 "iaa_scan_accel_module", 00:06:48.630 "keyring_file_remove_key", 00:06:48.630 "keyring_file_add_key", 00:06:48.630 "keyring_linux_set_options", 00:06:48.630 "fsdev_aio_delete", 00:06:48.630 "fsdev_aio_create", 00:06:48.630 "iscsi_get_histogram", 00:06:48.630 "iscsi_enable_histogram", 00:06:48.630 "iscsi_set_options", 00:06:48.630 "iscsi_get_auth_groups", 00:06:48.630 "iscsi_auth_group_remove_secret", 00:06:48.630 "iscsi_auth_group_add_secret", 00:06:48.630 "iscsi_delete_auth_group", 00:06:48.630 "iscsi_create_auth_group", 00:06:48.630 "iscsi_set_discovery_auth", 00:06:48.630 "iscsi_get_options", 00:06:48.630 "iscsi_target_node_request_logout", 00:06:48.630 "iscsi_target_node_set_redirect", 00:06:48.630 "iscsi_target_node_set_auth", 00:06:48.630 "iscsi_target_node_add_lun", 00:06:48.630 "iscsi_get_stats", 00:06:48.630 "iscsi_get_connections", 00:06:48.630 "iscsi_portal_group_set_auth", 00:06:48.630 "iscsi_start_portal_group", 00:06:48.630 "iscsi_delete_portal_group", 00:06:48.630 "iscsi_create_portal_group", 00:06:48.630 "iscsi_get_portal_groups", 00:06:48.630 "iscsi_delete_target_node", 00:06:48.630 "iscsi_target_node_remove_pg_ig_maps", 00:06:48.630 "iscsi_target_node_add_pg_ig_maps", 00:06:48.630 "iscsi_create_target_node", 00:06:48.630 "iscsi_get_target_nodes", 00:06:48.630 "iscsi_delete_initiator_group", 00:06:48.630 "iscsi_initiator_group_remove_initiators", 00:06:48.630 "iscsi_initiator_group_add_initiators", 00:06:48.630 "iscsi_create_initiator_group", 00:06:48.630 "iscsi_get_initiator_groups", 00:06:48.630 "nvmf_set_crdt", 00:06:48.630 "nvmf_set_config", 00:06:48.630 "nvmf_set_max_subsystems", 00:06:48.630 "nvmf_stop_mdns_prr", 00:06:48.630 "nvmf_publish_mdns_prr", 00:06:48.630 "nvmf_subsystem_get_listeners", 00:06:48.630 "nvmf_subsystem_get_qpairs", 00:06:48.630 
"nvmf_subsystem_get_controllers", 00:06:48.630 "nvmf_get_stats", 00:06:48.630 "nvmf_get_transports", 00:06:48.630 "nvmf_create_transport", 00:06:48.630 "nvmf_get_targets", 00:06:48.630 "nvmf_delete_target", 00:06:48.630 "nvmf_create_target", 00:06:48.630 "nvmf_subsystem_allow_any_host", 00:06:48.630 "nvmf_subsystem_set_keys", 00:06:48.630 "nvmf_subsystem_remove_host", 00:06:48.630 "nvmf_subsystem_add_host", 00:06:48.630 "nvmf_ns_remove_host", 00:06:48.630 "nvmf_ns_add_host", 00:06:48.630 "nvmf_subsystem_remove_ns", 00:06:48.630 "nvmf_subsystem_set_ns_ana_group", 00:06:48.630 "nvmf_subsystem_add_ns", 00:06:48.630 "nvmf_subsystem_listener_set_ana_state", 00:06:48.630 "nvmf_discovery_get_referrals", 00:06:48.630 "nvmf_discovery_remove_referral", 00:06:48.630 "nvmf_discovery_add_referral", 00:06:48.630 "nvmf_subsystem_remove_listener", 00:06:48.630 "nvmf_subsystem_add_listener", 00:06:48.630 "nvmf_delete_subsystem", 00:06:48.630 "nvmf_create_subsystem", 00:06:48.630 "nvmf_get_subsystems", 00:06:48.630 "env_dpdk_get_mem_stats", 00:06:48.630 "nbd_get_disks", 00:06:48.630 "nbd_stop_disk", 00:06:48.630 "nbd_start_disk", 00:06:48.630 "ublk_recover_disk", 00:06:48.630 "ublk_get_disks", 00:06:48.630 "ublk_stop_disk", 00:06:48.630 "ublk_start_disk", 00:06:48.630 "ublk_destroy_target", 00:06:48.630 "ublk_create_target", 00:06:48.630 "virtio_blk_create_transport", 00:06:48.630 "virtio_blk_get_transports", 00:06:48.630 "vhost_controller_set_coalescing", 00:06:48.630 "vhost_get_controllers", 00:06:48.630 "vhost_delete_controller", 00:06:48.630 "vhost_create_blk_controller", 00:06:48.630 "vhost_scsi_controller_remove_target", 00:06:48.630 "vhost_scsi_controller_add_target", 00:06:48.630 "vhost_start_scsi_controller", 00:06:48.630 "vhost_create_scsi_controller", 00:06:48.630 "thread_set_cpumask", 00:06:48.630 "scheduler_set_options", 00:06:48.630 "framework_get_governor", 00:06:48.630 "framework_get_scheduler", 00:06:48.630 "framework_set_scheduler", 00:06:48.630 "framework_get_reactors", 00:06:48.630 "thread_get_io_channels", 00:06:48.630 "thread_get_pollers", 00:06:48.630 "thread_get_stats", 00:06:48.630 "framework_monitor_context_switch", 00:06:48.630 "spdk_kill_instance", 00:06:48.630 "log_enable_timestamps", 00:06:48.630 "log_get_flags", 00:06:48.630 "log_clear_flag", 00:06:48.630 "log_set_flag", 00:06:48.630 "log_get_level", 00:06:48.630 "log_set_level", 00:06:48.630 "log_get_print_level", 00:06:48.630 "log_set_print_level", 00:06:48.630 "framework_enable_cpumask_locks", 00:06:48.630 "framework_disable_cpumask_locks", 00:06:48.630 "framework_wait_init", 00:06:48.630 "framework_start_init", 00:06:48.630 "scsi_get_devices", 00:06:48.630 "bdev_get_histogram", 00:06:48.630 "bdev_enable_histogram", 00:06:48.630 "bdev_set_qos_limit", 00:06:48.630 "bdev_set_qd_sampling_period", 00:06:48.630 "bdev_get_bdevs", 00:06:48.630 "bdev_reset_iostat", 00:06:48.630 "bdev_get_iostat", 00:06:48.630 "bdev_examine", 00:06:48.630 "bdev_wait_for_examine", 00:06:48.630 "bdev_set_options", 00:06:48.630 "accel_get_stats", 00:06:48.630 "accel_set_options", 00:06:48.630 "accel_set_driver", 00:06:48.630 "accel_crypto_key_destroy", 00:06:48.630 "accel_crypto_keys_get", 00:06:48.630 "accel_crypto_key_create", 00:06:48.630 "accel_assign_opc", 00:06:48.630 "accel_get_module_info", 00:06:48.630 "accel_get_opc_assignments", 00:06:48.630 "vmd_rescan", 00:06:48.630 "vmd_remove_device", 00:06:48.630 "vmd_enable", 00:06:48.630 "sock_get_default_impl", 00:06:48.630 "sock_set_default_impl", 00:06:48.630 "sock_impl_set_options", 00:06:48.630 
"sock_impl_get_options", 00:06:48.630 "iobuf_get_stats", 00:06:48.630 "iobuf_set_options", 00:06:48.630 "keyring_get_keys", 00:06:48.630 "framework_get_pci_devices", 00:06:48.630 "framework_get_config", 00:06:48.630 "framework_get_subsystems", 00:06:48.630 "fsdev_set_opts", 00:06:48.630 "fsdev_get_opts", 00:06:48.630 "trace_get_info", 00:06:48.630 "trace_get_tpoint_group_mask", 00:06:48.630 "trace_disable_tpoint_group", 00:06:48.630 "trace_enable_tpoint_group", 00:06:48.630 "trace_clear_tpoint_mask", 00:06:48.630 "trace_set_tpoint_mask", 00:06:48.630 "notify_get_notifications", 00:06:48.630 "notify_get_types", 00:06:48.630 "spdk_get_version", 00:06:48.630 "rpc_get_methods" 00:06:48.630 ] 00:06:48.889 16:12:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.889 16:12:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:48.889 16:12:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70016 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70016 ']' 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70016 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70016 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.889 killing process with pid 70016 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70016' 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70016 00:06:48.889 16:12:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70016 00:06:49.148 00:06:49.148 real 0m1.762s 00:06:49.148 user 0m3.474s 00:06:49.148 sys 0m0.364s 00:06:49.148 16:12:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.148 16:12:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.148 ************************************ 00:06:49.148 END TEST spdkcli_tcp 00:06:49.148 ************************************ 00:06:49.148 16:12:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.148 16:12:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.148 16:12:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.148 16:12:14 -- common/autotest_common.sh@10 -- # set +x 00:06:49.148 ************************************ 00:06:49.148 START TEST dpdk_mem_utility 00:06:49.148 ************************************ 00:06:49.148 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.148 * Looking for test storage... 
00:06:49.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:49.148 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.148 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.148 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.148 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.148 16:12:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:49.408 16:12:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:49.408 16:12:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.408 16:12:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:49.408 16:12:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.408 16:12:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.408 16:12:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.408 16:12:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.408 --rc genhtml_branch_coverage=1 00:06:49.408 --rc genhtml_function_coverage=1 00:06:49.408 --rc genhtml_legend=1 00:06:49.408 --rc geninfo_all_blocks=1 00:06:49.408 --rc geninfo_unexecuted_blocks=1 00:06:49.408 00:06:49.408 ' 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.408 --rc 
genhtml_branch_coverage=1 00:06:49.408 --rc genhtml_function_coverage=1 00:06:49.408 --rc genhtml_legend=1 00:06:49.408 --rc geninfo_all_blocks=1 00:06:49.408 --rc geninfo_unexecuted_blocks=1 00:06:49.408 00:06:49.408 ' 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.408 --rc genhtml_branch_coverage=1 00:06:49.408 --rc genhtml_function_coverage=1 00:06:49.408 --rc genhtml_legend=1 00:06:49.408 --rc geninfo_all_blocks=1 00:06:49.408 --rc geninfo_unexecuted_blocks=1 00:06:49.408 00:06:49.408 ' 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.408 --rc genhtml_branch_coverage=1 00:06:49.408 --rc genhtml_function_coverage=1 00:06:49.408 --rc genhtml_legend=1 00:06:49.408 --rc geninfo_all_blocks=1 00:06:49.408 --rc geninfo_unexecuted_blocks=1 00:06:49.408 00:06:49.408 ' 00:06:49.408 16:12:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:49.408 16:12:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70109 00:06:49.408 16:12:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.408 16:12:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70109 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70109 ']' 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.408 16:12:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.408 [2024-11-26 16:12:14.880656] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
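Up to this point test_dpdk_mem_info.sh has launched spdk_tgt and is waiting for it to listen on /var/tmp/spdk.sock; it then asks the target to write a memory dump via env_dpdk_get_mem_stats and post-processes it with scripts/dpdk_mem_info.py, which produces the heap/mempool/memzone report that fills the next several hundred lines. A rough manual equivalent of that flow, assuming a target already running on the default socket:

    ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt and returns the path
    ./scripts/dpdk_mem_info.py                 # summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py -m 0            # per-element detail for heap id 0, as printed below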
00:06:49.408 [2024-11-26 16:12:14.880797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70109 ] 00:06:49.408 [2024-11-26 16:12:15.031126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.667 [2024-11-26 16:12:15.055139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.667 [2024-11-26 16:12:15.096788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.667 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.667 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:49.667 16:12:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:49.667 16:12:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:49.667 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.667 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.667 { 00:06:49.667 "filename": "/tmp/spdk_mem_dump.txt" 00:06:49.667 } 00:06:49.667 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.667 16:12:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:49.667 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:49.668 1 heaps totaling size 810.000000 MiB 00:06:49.668 size: 810.000000 MiB heap id: 0 00:06:49.668 end heaps---------- 00:06:49.668 9 mempools totaling size 595.772034 MiB 00:06:49.668 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:49.668 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:49.668 size: 92.545471 MiB name: bdev_io_70109 00:06:49.668 size: 50.003479 MiB name: msgpool_70109 00:06:49.668 size: 36.509338 MiB name: fsdev_io_70109 00:06:49.668 size: 21.763794 MiB name: PDU_Pool 00:06:49.668 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:49.668 size: 4.133484 MiB name: evtpool_70109 00:06:49.668 size: 0.026123 MiB name: Session_Pool 00:06:49.668 end mempools------- 00:06:49.668 6 memzones totaling size 4.142822 MiB 00:06:49.668 size: 1.000366 MiB name: RG_ring_0_70109 00:06:49.668 size: 1.000366 MiB name: RG_ring_1_70109 00:06:49.668 size: 1.000366 MiB name: RG_ring_4_70109 00:06:49.668 size: 1.000366 MiB name: RG_ring_5_70109 00:06:49.668 size: 0.125366 MiB name: RG_ring_2_70109 00:06:49.668 size: 0.015991 MiB name: RG_ring_3_70109 00:06:49.668 end memzones------- 00:06:49.668 16:12:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:49.928 heap id: 0 total size: 810.000000 MiB number of busy elements: 330 number of free elements: 15 00:06:49.928 list of free elements. 
size: 10.810120 MiB 00:06:49.928 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:49.928 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:49.928 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:49.928 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:49.928 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:49.928 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:49.928 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:49.928 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:49.928 element at address: 0x20001a600000 with size: 0.564575 MiB 00:06:49.928 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:49.928 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:49.928 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:49.928 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:49.928 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:49.928 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:49.928 list of standard malloc elements. size: 199.270996 MiB 00:06:49.928 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:49.928 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:49.928 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:49.928 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:49.928 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:49.928 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:49.928 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:49.928 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:49.928 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:49.928 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:49.928 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:49.928 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:49.928 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:49.928 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:49.928 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:49.928 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:49.928 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:49.928 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:49.928 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:49.929 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:49.929 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690880 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690940 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690a00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690ac0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690b80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690c40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690d00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690dc0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690e80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a690f40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691000 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a6910c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691180 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691240 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691300 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691480 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691540 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:49.929 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6922c0 with size: 0.000183 MiB 
00:06:49.930 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:49.930 element at 
address: 0x20001a694840 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:49.930 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6d980 
with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:49.930 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:49.931 element at address: 0x200027a6fe40 with size: 0.000183 MiB 
00:06:49.931 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:49.931 list of memzone associated elements. size: 599.918884 MiB 00:06:49.931 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:49.931 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:49.931 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:49.931 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:49.931 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:49.931 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70109_0 00:06:49.931 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:49.931 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70109_0 00:06:49.931 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:49.931 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70109_0 00:06:49.931 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:49.931 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:49.931 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:49.931 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:49.931 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:49.931 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70109_0 00:06:49.931 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:49.931 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70109 00:06:49.931 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:49.931 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70109 00:06:49.931 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:49.931 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:49.931 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:49.931 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:49.931 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:49.931 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:49.931 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:49.931 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:49.931 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:49.931 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70109 00:06:49.931 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:49.931 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70109 00:06:49.931 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:49.931 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70109 00:06:49.931 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:49.931 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70109 00:06:49.931 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:49.931 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70109 00:06:49.931 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:49.931 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70109 00:06:49.931 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:49.931 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:49.931 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:49.931 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 
00:06:49.931 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:49.931 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:49.931 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:49.931 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70109 00:06:49.931 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:49.931 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70109 00:06:49.931 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:49.931 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:49.931 element at address: 0x200027a65680 with size: 0.023743 MiB 00:06:49.931 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:49.931 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:49.931 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70109 00:06:49.931 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:06:49.931 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:49.931 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:49.931 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70109 00:06:49.931 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:49.931 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70109 00:06:49.931 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:49.931 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70109 00:06:49.931 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:06:49.931 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:49.931 16:12:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:49.931 16:12:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70109 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70109 ']' 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70109 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70109 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.931 killing process with pid 70109 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70109' 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70109 00:06:49.931 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70109 00:06:50.190 00:06:50.191 real 0m0.971s 00:06:50.191 user 0m1.004s 00:06:50.191 sys 0m0.329s 00:06:50.191 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.191 16:12:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.191 ************************************ 00:06:50.191 END TEST dpdk_mem_utility 00:06:50.191 ************************************ 00:06:50.191 16:12:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:50.191 16:12:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.191 
16:12:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.191 16:12:15 -- common/autotest_common.sh@10 -- # set +x 00:06:50.191 ************************************ 00:06:50.191 START TEST event 00:06:50.191 ************************************ 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:50.191 * Looking for test storage... 00:06:50.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.191 16:12:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.191 16:12:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.191 16:12:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.191 16:12:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.191 16:12:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.191 16:12:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.191 16:12:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.191 16:12:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.191 16:12:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.191 16:12:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.191 16:12:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.191 16:12:15 event -- scripts/common.sh@344 -- # case "$op" in 00:06:50.191 16:12:15 event -- scripts/common.sh@345 -- # : 1 00:06:50.191 16:12:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.191 16:12:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.191 16:12:15 event -- scripts/common.sh@365 -- # decimal 1 00:06:50.191 16:12:15 event -- scripts/common.sh@353 -- # local d=1 00:06:50.191 16:12:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.191 16:12:15 event -- scripts/common.sh@355 -- # echo 1 00:06:50.191 16:12:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.191 16:12:15 event -- scripts/common.sh@366 -- # decimal 2 00:06:50.191 16:12:15 event -- scripts/common.sh@353 -- # local d=2 00:06:50.191 16:12:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.191 16:12:15 event -- scripts/common.sh@355 -- # echo 2 00:06:50.191 16:12:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.191 16:12:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.191 16:12:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.191 16:12:15 event -- scripts/common.sh@368 -- # return 0 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.191 --rc genhtml_branch_coverage=1 00:06:50.191 --rc genhtml_function_coverage=1 00:06:50.191 --rc genhtml_legend=1 00:06:50.191 --rc geninfo_all_blocks=1 00:06:50.191 --rc geninfo_unexecuted_blocks=1 00:06:50.191 00:06:50.191 ' 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.191 --rc genhtml_branch_coverage=1 00:06:50.191 --rc genhtml_function_coverage=1 00:06:50.191 --rc genhtml_legend=1 00:06:50.191 --rc geninfo_all_blocks=1 00:06:50.191 --rc geninfo_unexecuted_blocks=1 00:06:50.191 00:06:50.191 ' 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.191 --rc genhtml_branch_coverage=1 00:06:50.191 --rc genhtml_function_coverage=1 00:06:50.191 --rc genhtml_legend=1 00:06:50.191 --rc geninfo_all_blocks=1 00:06:50.191 --rc geninfo_unexecuted_blocks=1 00:06:50.191 00:06:50.191 ' 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.191 --rc genhtml_branch_coverage=1 00:06:50.191 --rc genhtml_function_coverage=1 00:06:50.191 --rc genhtml_legend=1 00:06:50.191 --rc geninfo_all_blocks=1 00:06:50.191 --rc geninfo_unexecuted_blocks=1 00:06:50.191 00:06:50.191 ' 00:06:50.191 16:12:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:50.191 16:12:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:50.191 16:12:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:50.191 16:12:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.191 16:12:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.191 ************************************ 00:06:50.191 START TEST event_perf 00:06:50.191 ************************************ 00:06:50.191 16:12:15 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:50.191 Running I/O for 1 seconds...[2024-11-26 
16:12:15.829477] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:06:50.191 [2024-11-26 16:12:15.829611] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70181 ] 00:06:50.450 [2024-11-26 16:12:15.973029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.450 [2024-11-26 16:12:15.993524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.450 [2024-11-26 16:12:15.993657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.450 [2024-11-26 16:12:15.993772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.450 Running I/O for 1 seconds...[2024-11-26 16:12:15.993773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.386 00:06:51.386 lcore 0: 195939 00:06:51.386 lcore 1: 195938 00:06:51.386 lcore 2: 195938 00:06:51.386 lcore 3: 195937 00:06:51.386 done. 00:06:51.386 00:06:51.386 real 0m1.210s 00:06:51.386 user 0m4.052s 00:06:51.386 sys 0m0.037s 00:06:51.386 16:12:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.386 16:12:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.386 ************************************ 00:06:51.386 END TEST event_perf 00:06:51.386 ************************************ 00:06:51.645 16:12:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:51.645 16:12:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:51.645 16:12:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.645 16:12:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.645 ************************************ 00:06:51.645 START TEST event_reactor 00:06:51.645 ************************************ 00:06:51.645 16:12:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:51.645 [2024-11-26 16:12:17.095091] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
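The event_perf run above starts the reactor framework on four cores (-m 0xF) for one second (-t 1) and prints the number of events each lcore processed; roughly equal per-lcore counts (about 195.9 thousand each here) are the expected result. The binary can be invoked directly from a built SPDK tree with the same arguments the test harness used:

    # four reactors, one-second run; per-lcore event counts are printed at the end
    ./test/event/event_perf/event_perf -m 0xF -t 1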
00:06:51.645 [2024-11-26 16:12:17.095186] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70220 ] 00:06:51.645 [2024-11-26 16:12:17.233602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.645 [2024-11-26 16:12:17.251367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.022 test_start 00:06:53.022 oneshot 00:06:53.022 tick 100 00:06:53.022 tick 100 00:06:53.022 tick 250 00:06:53.022 tick 100 00:06:53.022 tick 100 00:06:53.022 tick 100 00:06:53.022 tick 250 00:06:53.022 tick 500 00:06:53.022 tick 100 00:06:53.022 tick 100 00:06:53.022 tick 250 00:06:53.022 tick 100 00:06:53.022 tick 100 00:06:53.022 test_end 00:06:53.022 00:06:53.022 real 0m1.200s 00:06:53.022 user 0m1.069s 00:06:53.022 sys 0m0.026s 00:06:53.022 16:12:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.022 ************************************ 00:06:53.022 END TEST event_reactor 00:06:53.022 ************************************ 00:06:53.022 16:12:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:53.022 16:12:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:53.022 16:12:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:53.022 16:12:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.022 16:12:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.022 ************************************ 00:06:53.022 START TEST event_reactor_perf 00:06:53.022 ************************************ 00:06:53.022 16:12:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:53.022 [2024-11-26 16:12:18.348911] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
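The oneshot/tick trace above comes from the event_reactor test, which appears to schedule a one-shot event plus periodic timers on a single reactor and log each expiration (the tick 100/250/500 lines); event_reactor_perf, starting at the end of this chunk, instead measures raw event throughput on one core. Both binaries take a runtime in seconds via -t and are invoked from the test tree exactly as the harness does:

    ./test/event/reactor/reactor -t 1              # timer/tick trace on one reactor
    ./test/event/reactor_perf/reactor_perf -t 1    # reports events per second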
00:06:53.022 [2024-11-26 16:12:18.349006] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70250 ] 00:06:53.022 [2024-11-26 16:12:18.485943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.022 [2024-11-26 16:12:18.503831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.958 test_start 00:06:53.958 test_end 00:06:53.958 Performance: 435360 events per second 00:06:53.958 00:06:53.958 real 0m1.199s 00:06:53.958 user 0m1.063s 00:06:53.958 sys 0m0.032s 00:06:53.958 16:12:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.958 ************************************ 00:06:53.958 END TEST event_reactor_perf 00:06:53.958 ************************************ 00:06:53.958 16:12:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.958 16:12:19 event -- event/event.sh@49 -- # uname -s 00:06:53.958 16:12:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:53.958 16:12:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:53.958 16:12:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.958 16:12:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.958 16:12:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.958 ************************************ 00:06:53.958 START TEST event_scheduler 00:06:53.958 ************************************ 00:06:53.958 16:12:19 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:54.217 * Looking for test storage... 
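scheduler.sh, which begins here, starts the scheduler test app with --wait-for-rpc, switches it to the dynamic scheduler over RPC, completes startup with framework_start_init, and then creates active and idle pinned threads through the scheduler_plugin RPCs seen further down. A sketch of the RPC half of that flow against the default socket; the POWER/cpufreq errors that follow typically just mean the CI VM exposes no scaling_governor files, so the dpdk governor is skipped while the dynamic scheduler itself still loads:

    ./scripts/rpc.py framework_set_scheduler dynamic   # the test sets the scheduler before framework_start_init
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py framework_get_scheduler           # confirm the active scheduler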
00:06:54.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.217 16:12:19 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.217 --rc genhtml_branch_coverage=1 00:06:54.217 --rc genhtml_function_coverage=1 00:06:54.217 --rc genhtml_legend=1 00:06:54.217 --rc geninfo_all_blocks=1 00:06:54.217 --rc geninfo_unexecuted_blocks=1 00:06:54.217 00:06:54.217 ' 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.217 --rc genhtml_branch_coverage=1 00:06:54.217 --rc genhtml_function_coverage=1 00:06:54.217 --rc genhtml_legend=1 00:06:54.217 --rc geninfo_all_blocks=1 00:06:54.217 --rc geninfo_unexecuted_blocks=1 00:06:54.217 00:06:54.217 ' 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.217 --rc genhtml_branch_coverage=1 00:06:54.217 --rc genhtml_function_coverage=1 00:06:54.217 --rc genhtml_legend=1 00:06:54.217 --rc geninfo_all_blocks=1 00:06:54.217 --rc geninfo_unexecuted_blocks=1 00:06:54.217 00:06:54.217 ' 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.217 --rc genhtml_branch_coverage=1 00:06:54.217 --rc genhtml_function_coverage=1 00:06:54.217 --rc genhtml_legend=1 00:06:54.217 --rc geninfo_all_blocks=1 00:06:54.217 --rc geninfo_unexecuted_blocks=1 00:06:54.217 00:06:54.217 ' 00:06:54.217 16:12:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:54.217 16:12:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70319 00:06:54.217 16:12:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:54.217 16:12:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.217 16:12:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70319 00:06:54.217 16:12:19 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70319 ']' 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.217 16:12:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.217 [2024-11-26 16:12:19.823600] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:06:54.217 [2024-11-26 16:12:19.823695] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70319 ] 00:06:54.476 [2024-11-26 16:12:19.970120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.476 [2024-11-26 16:12:19.997448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.476 [2024-11-26 16:12:19.997572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.476 [2024-11-26 16:12:19.997710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.476 [2024-11-26 16:12:19.997712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.476 16:12:20 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.476 16:12:20 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:54.476 16:12:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:54.476 16:12:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.476 16:12:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.476 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.476 POWER: Cannot set governor of lcore 0 to performance 00:06:54.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.476 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.476 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:54.476 POWER: Unable to set Power Management Environment for lcore 0 00:06:54.476 [2024-11-26 16:12:20.082697] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:54.476 [2024-11-26 16:12:20.082713] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:54.476 [2024-11-26 16:12:20.082755] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:54.476 [2024-11-26 16:12:20.082776] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:54.476 [2024-11-26 16:12:20.082786] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:54.476 [2024-11-26 16:12:20.082794] scheduler_dynamic.c: 
431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:54.476 16:12:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.476 16:12:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:54.476 16:12:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.476 16:12:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.735 [2024-11-26 16:12:20.123880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.735 [2024-11-26 16:12:20.141486] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:54.735 16:12:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.735 16:12:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:54.735 16:12:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.735 16:12:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.735 16:12:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.735 ************************************ 00:06:54.735 START TEST scheduler_create_thread 00:06:54.735 ************************************ 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.735 2 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.735 3 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.735 4 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.735 5 00:06:54.735 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 6 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 7 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 8 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 9 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 10 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@22 -- # thread_id=11 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.736 16:12:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.111 16:12:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.111 16:12:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:56.111 16:12:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:56.111 16:12:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.111 16:12:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.487 16:12:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.487 00:06:57.487 real 0m2.612s 00:06:57.487 user 0m0.013s 00:06:57.487 sys 0m0.007s 00:06:57.487 ************************************ 00:06:57.487 END TEST scheduler_create_thread 00:06:57.487 ************************************ 00:06:57.487 16:12:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.487 16:12:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.487 16:12:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:57.487 16:12:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70319 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70319 ']' 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70319 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70319 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:57.487 killing process with pid 70319 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70319' 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70319 00:06:57.487 16:12:22 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70319 00:06:57.746 [2024-11-26 16:12:23.244583] scheduler.c: 360:test_shutdown: *NOTICE*: 
Scheduler test application stopped. 00:06:57.746 00:06:57.746 real 0m3.777s 00:06:57.746 user 0m5.688s 00:06:57.746 sys 0m0.295s 00:06:57.746 16:12:23 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.746 16:12:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.746 ************************************ 00:06:57.746 END TEST event_scheduler 00:06:57.746 ************************************ 00:06:58.006 16:12:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:58.006 16:12:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:58.006 16:12:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.006 16:12:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.006 16:12:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.006 ************************************ 00:06:58.006 START TEST app_repeat 00:06:58.006 ************************************ 00:06:58.006 16:12:23 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70406 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:58.006 Process app_repeat pid: 70406 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70406' 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.006 spdk_app_start Round 0 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:58.006 16:12:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70406 /var/tmp/spdk-nbd.sock 00:06:58.006 16:12:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70406 ']' 00:06:58.006 16:12:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.006 16:12:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:58.006 16:12:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.006 16:12:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.006 16:12:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.006 [2024-11-26 16:12:23.454979] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
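The event_scheduler run above reduces to a short RPC sequence. A minimal sketch, assuming the SPDK tree sits at ./spdk, rpc.py talks to the default /var/tmp/spdk.sock socket, and the scheduler_plugin module from test/event/scheduler/ is importable (e.g. via PYTHONPATH); thread IDs 11 and 12 are simply the ones this particular run returned:

    # Start the scheduler test app on 4 cores with main core 2, paused before subsystem init.
    ./spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!

    # Switch to the dynamic scheduler, then let initialization finish.
    ./spdk/scripts/rpc.py framework_set_scheduler dynamic
    ./spdk/scripts/rpc.py framework_start_init

    # Create pinned threads through the scheduler RPC plugin, re-weight one, delete one.
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # ID returned by a create call above
    ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # ID returned by a create call above

    kill "$scheduler_pid" && wait "$scheduler_pid"

The POWER / GUEST_CHANNEL errors earlier in the trace only mean that cpufreq governors cannot be set inside this VM; the dynamic scheduler still runs, falling back to its default thresholds (load limit 20, core limit 80, core busy 95).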
00:06:58.006 [2024-11-26 16:12:23.455077] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70406 ] 00:06:58.006 [2024-11-26 16:12:23.598564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.006 [2024-11-26 16:12:23.618169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.006 [2024-11-26 16:12:23.618176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.006 [2024-11-26 16:12:23.646699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.266 16:12:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.266 16:12:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:58.266 16:12:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.525 Malloc0 00:06:58.525 16:12:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.784 Malloc1 00:06:58.784 16:12:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.784 16:12:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.044 /dev/nbd0 00:06:59.044 16:12:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.044 16:12:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.044 16:12:24 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.044 1+0 records in 00:06:59.044 1+0 records out 00:06:59.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293587 s, 14.0 MB/s 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.044 16:12:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:59.044 16:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.044 16:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.044 16:12:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.303 /dev/nbd1 00:06:59.303 16:12:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.303 16:12:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.304 1+0 records in 00:06:59.304 1+0 records out 00:06:59.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208429 s, 19.7 MB/s 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.304 16:12:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:59.304 16:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.304 16:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.304 16:12:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:59.304 16:12:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.304 16:12:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.563 { 00:06:59.563 "nbd_device": "/dev/nbd0", 00:06:59.563 "bdev_name": "Malloc0" 00:06:59.563 }, 00:06:59.563 { 00:06:59.563 "nbd_device": "/dev/nbd1", 00:06:59.563 "bdev_name": "Malloc1" 00:06:59.563 } 00:06:59.563 ]' 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.563 { 00:06:59.563 "nbd_device": "/dev/nbd0", 00:06:59.563 "bdev_name": "Malloc0" 00:06:59.563 }, 00:06:59.563 { 00:06:59.563 "nbd_device": "/dev/nbd1", 00:06:59.563 "bdev_name": "Malloc1" 00:06:59.563 } 00:06:59.563 ]' 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.563 /dev/nbd1' 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.563 /dev/nbd1' 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.563 16:12:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.822 256+0 records in 00:06:59.822 256+0 records out 00:06:59.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106424 s, 98.5 MB/s 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.822 256+0 records in 00:06:59.822 256+0 records out 00:06:59.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258881 s, 40.5 MB/s 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.822 256+0 records in 00:06:59.822 256+0 records out 00:06:59.822 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026079 s, 40.2 MB/s 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.822 16:12:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.823 16:12:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.823 16:12:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.823 16:12:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.823 16:12:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.823 16:12:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.823 16:12:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.823 16:12:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.823 16:12:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.081 16:12:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.340 16:12:25 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.340 16:12:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.600 16:12:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.600 16:12:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.859 16:12:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.118 [2024-11-26 16:12:26.512938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.118 [2024-11-26 16:12:26.531135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.118 [2024-11-26 16:12:26.531146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.118 [2024-11-26 16:12:26.558085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.118 [2024-11-26 16:12:26.558167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.118 [2024-11-26 16:12:26.558179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.406 16:12:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:04.406 spdk_app_start Round 1 00:07:04.406 16:12:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:04.406 16:12:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70406 /var/tmp/spdk-nbd.sock 00:07:04.406 16:12:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70406 ']' 00:07:04.406 16:12:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.406 16:12:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.406 16:12:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
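Each app_repeat round in these traces performs the same malloc-over-NBD data check. Condensed into a sketch (socket path as in the trace; the temp-file location and the $rpc shorthand are illustrative):

    rpc="./spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Create two 64 MiB malloc bdevs with a 4096-byte block size and export them over NBD.
    $rpc bdev_malloc_create 64 4096        # -> Malloc0
    $rpc bdev_malloc_create 64 4096        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data through each NBD device and compare it back.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$d"
    done
    rm /tmp/nbdrandtest

    # Tear the exports down and ask the app to restart for the next round.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM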
00:07:04.406 16:12:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.406 16:12:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.406 16:12:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.406 16:12:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:04.406 16:12:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.406 Malloc0 00:07:04.407 16:12:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.665 Malloc1 00:07:04.665 16:12:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.665 16:12:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.924 /dev/nbd0 00:07:04.924 16:12:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.924 16:12:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.924 1+0 records in 00:07:04.924 1+0 records out 
00:07:04.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302298 s, 13.5 MB/s 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.924 16:12:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:04.924 16:12:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.924 16:12:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.924 16:12:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.183 /dev/nbd1 00:07:05.183 16:12:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.183 16:12:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.183 1+0 records in 00:07:05.183 1+0 records out 00:07:05.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293771 s, 13.9 MB/s 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.183 16:12:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.183 16:12:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.183 16:12:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.183 16:12:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.183 16:12:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.183 16:12:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.442 16:12:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.442 { 00:07:05.442 "nbd_device": "/dev/nbd0", 00:07:05.442 "bdev_name": "Malloc0" 00:07:05.442 }, 00:07:05.442 { 00:07:05.442 "nbd_device": "/dev/nbd1", 00:07:05.442 "bdev_name": "Malloc1" 00:07:05.442 } 
00:07:05.442 ]' 00:07:05.442 16:12:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.442 { 00:07:05.442 "nbd_device": "/dev/nbd0", 00:07:05.442 "bdev_name": "Malloc0" 00:07:05.442 }, 00:07:05.442 { 00:07:05.442 "nbd_device": "/dev/nbd1", 00:07:05.442 "bdev_name": "Malloc1" 00:07:05.442 } 00:07:05.442 ]' 00:07:05.442 16:12:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.442 16:12:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.442 /dev/nbd1' 00:07:05.442 16:12:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.442 16:12:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.442 /dev/nbd1' 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.701 256+0 records in 00:07:05.701 256+0 records out 00:07:05.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596695 s, 176 MB/s 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.701 256+0 records in 00:07:05.701 256+0 records out 00:07:05.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021453 s, 48.9 MB/s 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.701 256+0 records in 00:07:05.701 256+0 records out 00:07:05.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267042 s, 39.3 MB/s 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.701 16:12:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.004 16:12:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.284 16:12:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.542 16:12:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.542 16:12:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.801 16:12:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.059 [2024-11-26 16:12:32.470307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.059 [2024-11-26 16:12:32.488681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.059 [2024-11-26 16:12:32.488692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.059 [2024-11-26 16:12:32.516687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.059 [2024-11-26 16:12:32.516772] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.059 [2024-11-26 16:12:32.516784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.342 16:12:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.342 spdk_app_start Round 2 00:07:10.342 16:12:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:10.342 16:12:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70406 /var/tmp/spdk-nbd.sock 00:07:10.342 16:12:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70406 ']' 00:07:10.342 16:12:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.342 16:12:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.342 16:12:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
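The waitfornbd / waitfornbd_exit helpers that appear around every nbd_start_disk / nbd_stop_disk call just poll /proc/partitions and then prove the device readable. Roughly (a sketch of the logic, not the exact autotest_common.sh / nbd_common.sh source; the 0.1 s retry delay is assumed and not visible in the trace):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do           # wait for the kernel to list the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do           # then require one successful 4 KiB direct read
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                          # a non-empty read means the device is usable
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do           # wait for the device to disappear again
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }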
00:07:10.342 16:12:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.342 16:12:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.342 16:12:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.342 16:12:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:10.342 16:12:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.342 Malloc0 00:07:10.342 16:12:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.601 Malloc1 00:07:10.601 16:12:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.601 16:12:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:10.859 /dev/nbd0 00:07:10.859 16:12:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:10.859 16:12:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:10.859 1+0 records in 00:07:10.859 1+0 records out 
00:07:10.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310218 s, 13.2 MB/s 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.859 16:12:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:10.859 16:12:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.859 16:12:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.859 16:12:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.118 /dev/nbd1 00:07:11.118 16:12:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.118 16:12:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.118 1+0 records in 00:07:11.118 1+0 records out 00:07:11.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323456 s, 12.7 MB/s 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.118 16:12:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:11.118 16:12:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.118 16:12:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.118 16:12:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.118 16:12:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.118 16:12:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.685 16:12:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.685 { 00:07:11.685 "nbd_device": "/dev/nbd0", 00:07:11.685 "bdev_name": "Malloc0" 00:07:11.685 }, 00:07:11.685 { 00:07:11.685 "nbd_device": "/dev/nbd1", 00:07:11.685 "bdev_name": "Malloc1" 00:07:11.685 } 
00:07:11.685 ]' 00:07:11.685 16:12:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.685 { 00:07:11.685 "nbd_device": "/dev/nbd0", 00:07:11.685 "bdev_name": "Malloc0" 00:07:11.685 }, 00:07:11.685 { 00:07:11.685 "nbd_device": "/dev/nbd1", 00:07:11.685 "bdev_name": "Malloc1" 00:07:11.685 } 00:07:11.685 ]' 00:07:11.685 16:12:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:11.686 /dev/nbd1' 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:11.686 /dev/nbd1' 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:11.686 256+0 records in 00:07:11.686 256+0 records out 00:07:11.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0095874 s, 109 MB/s 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.686 256+0 records in 00:07:11.686 256+0 records out 00:07:11.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237476 s, 44.2 MB/s 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.686 256+0 records in 00:07:11.686 256+0 records out 00:07:11.686 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226271 s, 46.3 MB/s 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.686 16:12:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.944 16:12:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.203 16:12:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.461 16:12:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.461 16:12:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.027 16:12:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:13.027 [2024-11-26 16:12:38.470884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.027 [2024-11-26 16:12:38.489874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.028 [2024-11-26 16:12:38.489886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.028 [2024-11-26 16:12:38.520488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.028 [2024-11-26 16:12:38.520596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:13.028 [2024-11-26 16:12:38.520609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.311 16:12:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70406 /var/tmp/spdk-nbd.sock 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70406 ']' 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
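[editor note] For reference, the write/verify pass traced above reduces to the following shell sequence. This is a minimal sketch using the same paths the harness logs (/dev/nbd0, /dev/nbd1, the nbdrandtest temp file, the spdk-nbd.sock RPC socket), not the test script itself:

  # write a 1 MiB random pattern, copy it to each exported NBD, then compare it back
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write phase, bypassing the page cache
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                               # verify phase: byte-for-byte compare
  done
  rm "$tmp"
  # detach the devices through the dedicated NBD RPC socket
  for nbd in /dev/nbd0 /dev/nbd1; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"
  done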
00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:16.311 16:12:41 event.app_repeat -- event/event.sh@39 -- # killprocess 70406 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70406 ']' 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70406 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:16.311 16:12:41 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.312 16:12:41 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70406 00:07:16.312 killing process with pid 70406 00:07:16.312 16:12:41 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.312 16:12:41 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.312 16:12:41 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70406' 00:07:16.312 16:12:41 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70406 00:07:16.312 16:12:41 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70406 00:07:16.312 spdk_app_start is called in Round 0. 00:07:16.312 Shutdown signal received, stop current app iteration 00:07:16.312 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 reinitialization... 00:07:16.312 spdk_app_start is called in Round 1. 00:07:16.312 Shutdown signal received, stop current app iteration 00:07:16.312 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 reinitialization... 00:07:16.312 spdk_app_start is called in Round 2. 00:07:16.312 Shutdown signal received, stop current app iteration 00:07:16.312 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 reinitialization... 00:07:16.312 spdk_app_start is called in Round 3. 00:07:16.312 Shutdown signal received, stop current app iteration 00:07:16.312 16:12:41 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:16.312 16:12:41 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:16.312 00:07:16.312 real 0m18.374s 00:07:16.312 user 0m42.358s 00:07:16.312 sys 0m2.427s 00:07:16.312 16:12:41 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.312 16:12:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.312 ************************************ 00:07:16.312 END TEST app_repeat 00:07:16.312 ************************************ 00:07:16.312 16:12:41 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:16.312 16:12:41 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:16.312 16:12:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.312 16:12:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.312 16:12:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.312 ************************************ 00:07:16.312 START TEST cpu_locks 00:07:16.312 ************************************ 00:07:16.312 16:12:41 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:16.312 * Looking for test storage... 
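[editor note] The killprocess/waitforlisten helpers that appear throughout these traces follow a simple pattern; roughly the logic visible in the trace above (a sketch, not a copy of autotest_common.sh):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                             # confirm the pid is still alive
      [ "$(uname)" = Linux ] && ps --no-headers -o comm= "$pid"  # log which process (e.g. reactor_0) is being stopped
      kill "$pid"                                                # default SIGTERM, same signal app_repeat traps
      wait "$pid"                                                # reap it so the next test starts from a clean slate
  }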
00:07:16.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:16.312 16:12:41 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.312 16:12:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.312 16:12:41 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.571 16:12:42 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.571 16:12:42 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:16.571 16:12:42 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.571 16:12:42 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.571 --rc genhtml_branch_coverage=1 00:07:16.571 --rc genhtml_function_coverage=1 00:07:16.571 --rc genhtml_legend=1 00:07:16.571 --rc geninfo_all_blocks=1 00:07:16.571 --rc geninfo_unexecuted_blocks=1 00:07:16.571 00:07:16.571 ' 00:07:16.571 16:12:42 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.571 --rc genhtml_branch_coverage=1 00:07:16.571 --rc genhtml_function_coverage=1 
00:07:16.571 --rc genhtml_legend=1 00:07:16.571 --rc geninfo_all_blocks=1 00:07:16.571 --rc geninfo_unexecuted_blocks=1 00:07:16.571 00:07:16.571 ' 00:07:16.571 16:12:42 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.571 --rc genhtml_branch_coverage=1 00:07:16.571 --rc genhtml_function_coverage=1 00:07:16.571 --rc genhtml_legend=1 00:07:16.571 --rc geninfo_all_blocks=1 00:07:16.571 --rc geninfo_unexecuted_blocks=1 00:07:16.571 00:07:16.571 ' 00:07:16.571 16:12:42 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.571 --rc genhtml_branch_coverage=1 00:07:16.571 --rc genhtml_function_coverage=1 00:07:16.571 --rc genhtml_legend=1 00:07:16.571 --rc geninfo_all_blocks=1 00:07:16.571 --rc geninfo_unexecuted_blocks=1 00:07:16.571 00:07:16.571 ' 00:07:16.571 16:12:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:16.571 16:12:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:16.571 16:12:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:16.571 16:12:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:16.571 16:12:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.571 16:12:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.571 16:12:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.571 ************************************ 00:07:16.571 START TEST default_locks 00:07:16.571 ************************************ 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70841 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70841 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70841 ']' 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.571 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.571 [2024-11-26 16:12:42.090749] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:16.571 [2024-11-26 16:12:42.090841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70841 ] 00:07:16.831 [2024-11-26 16:12:42.232763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.831 [2024-11-26 16:12:42.252734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.831 [2024-11-26 16:12:42.287178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.831 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.831 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:16.831 16:12:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70841 00:07:16.831 16:12:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70841 00:07:16.831 16:12:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70841 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70841 ']' 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70841 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70841 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.400 killing process with pid 70841 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70841' 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70841 00:07:17.400 16:12:42 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70841 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70841 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70841 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70841 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70841 ']' 00:07:17.659 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.659 
16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.660 ERROR: process (pid: 70841) is no longer running 00:07:17.660 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70841) - No such process 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:17.660 00:07:17.660 real 0m1.053s 00:07:17.660 user 0m1.127s 00:07:17.660 sys 0m0.410s 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.660 16:12:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.660 ************************************ 00:07:17.660 END TEST default_locks 00:07:17.660 ************************************ 00:07:17.660 16:12:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:17.660 16:12:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.660 16:12:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.660 16:12:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.660 ************************************ 00:07:17.660 START TEST default_locks_via_rpc 00:07:17.660 ************************************ 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70880 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70880 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70880 ']' 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.660 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.660 [2024-11-26 16:12:43.204283] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:17.660 [2024-11-26 16:12:43.204394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70880 ] 00:07:17.919 [2024-11-26 16:12:43.348789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.919 [2024-11-26 16:12:43.367141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.919 [2024-11-26 16:12:43.403675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70880 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70880 00:07:17.919 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.487 16:12:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70880 00:07:18.487 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70880 ']' 00:07:18.487 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70880 00:07:18.487 16:12:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:18.488 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.488 16:12:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70880 00:07:18.488 16:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.488 16:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.488 killing process with pid 70880 00:07:18.488 16:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70880' 00:07:18.488 16:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70880 00:07:18.488 16:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70880 00:07:18.746 00:07:18.746 real 0m1.071s 00:07:18.746 user 0m1.179s 00:07:18.746 sys 0m0.394s 00:07:18.746 16:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.746 16:12:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.746 ************************************ 00:07:18.746 END TEST default_locks_via_rpc 00:07:18.746 ************************************ 00:07:18.746 16:12:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:18.746 16:12:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.746 16:12:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.746 16:12:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.746 ************************************ 00:07:18.746 START TEST non_locking_app_on_locked_coremask 00:07:18.746 ************************************ 00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70918 00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70918 /var/tmp/spdk.sock 00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70918 ']' 00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
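[editor note] What the default_locks tests above actually check is the per-core lock file SPDK flocks when it claims a core. A hedged sketch of that check, using the same tools the trace shows (lslocks from util-linux and the framework_*_cpumask_locks RPCs); in the real test waitforlisten gates on the RPC socket before any rpc.py call:

  # start a target pinned to core 0; with locks enabled it holds /var/tmp/spdk_cpu_lock_000
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  pid=$!
  # the claimed core shows up as an flock in lslocks for that pid
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"
  # the lock can be dropped and re-taken at runtime over RPC
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks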
00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.746 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.746 [2024-11-26 16:12:44.328945] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:18.746 [2024-11-26 16:12:44.329044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70918 ] 00:07:19.005 [2024-11-26 16:12:44.475487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.005 [2024-11-26 16:12:44.495319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.005 [2024-11-26 16:12:44.529946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70932 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70932 /var/tmp/spdk2.sock 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70932 ']' 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.005 16:12:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.264 [2024-11-26 16:12:44.713058] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:19.264 [2024-11-26 16:12:44.713157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70932 ] 00:07:19.264 [2024-11-26 16:12:44.871927] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
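[editor note] The non_locking_app_on_locked_coremask case starting here runs two targets on the same core; the second one only comes up because it is told not to take the core lock and to use its own RPC socket. A minimal sketch mirroring the command lines in the trace:

  # first target claims core 0 and holds /var/tmp/spdk_cpu_lock_000
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  # second target shares core 0 but skips the lock and listens on a second RPC socket
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &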
00:07:19.264 [2024-11-26 16:12:44.871971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.264 [2024-11-26 16:12:44.909769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.523 [2024-11-26 16:12:44.975511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.523 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.523 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.523 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70918 00:07:19.523 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70918 00:07:19.523 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70918 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70918 ']' 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70918 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70918 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.459 killing process with pid 70918 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70918' 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70918 00:07:20.459 16:12:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70918 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70932 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70932 ']' 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70932 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70932 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.027 killing process with pid 70932 00:07:21.027 16:12:46 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70932' 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70932 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70932 00:07:21.027 00:07:21.027 real 0m2.366s 00:07:21.027 user 0m2.706s 00:07:21.027 sys 0m0.829s 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.027 16:12:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.027 ************************************ 00:07:21.027 END TEST non_locking_app_on_locked_coremask 00:07:21.027 ************************************ 00:07:21.027 16:12:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:21.027 16:12:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.027 16:12:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.027 16:12:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.287 ************************************ 00:07:21.287 START TEST locking_app_on_unlocked_coremask 00:07:21.287 ************************************ 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70982 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70982 /var/tmp/spdk.sock 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70982 ']' 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.287 16:12:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.287 [2024-11-26 16:12:46.747549] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:21.287 [2024-11-26 16:12:46.747653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70982 ] 00:07:21.287 [2024-11-26 16:12:46.888568] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
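[editor note] The locking_app_on_unlocked_coremask case that follows inverts the setup: the first target is started without core locks, so a second, lock-taking target on the same core is expected to start cleanly and claim the lock itself. Roughly, per the traced command lines:

  # primary target: same core mask, but no core lock files are taken
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # secondary target: locks enabled (default), claims /var/tmp/spdk_cpu_lock_000 unopposed
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &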
00:07:21.287 [2024-11-26 16:12:46.888616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.287 [2024-11-26 16:12:46.907036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.547 [2024-11-26 16:12:46.943091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70990 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70990 /var/tmp/spdk2.sock 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70990 ']' 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.547 16:12:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.547 [2024-11-26 16:12:47.115181] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:21.547 [2024-11-26 16:12:47.115284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70990 ] 00:07:21.806 [2024-11-26 16:12:47.268992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.806 [2024-11-26 16:12:47.307398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.806 [2024-11-26 16:12:47.375992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.743 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.743 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:22.743 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70990 00:07:22.743 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70990 00:07:22.743 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70982 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70982 ']' 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70982 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70982 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.311 killing process with pid 70982 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70982' 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70982 00:07:23.311 16:12:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70982 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70990 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70990 ']' 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70990 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70990 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.878 killing process with pid 70990 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70990' 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70990 00:07:23.878 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70990 00:07:24.137 00:07:24.137 real 0m2.879s 00:07:24.137 user 0m3.398s 00:07:24.137 sys 0m0.841s 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.137 ************************************ 00:07:24.137 END TEST locking_app_on_unlocked_coremask 00:07:24.137 ************************************ 00:07:24.137 16:12:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:24.137 16:12:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.137 16:12:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.137 16:12:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.137 ************************************ 00:07:24.137 START TEST locking_app_on_locked_coremask 00:07:24.137 ************************************ 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71052 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71052 /var/tmp/spdk.sock 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71052 ']' 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.137 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.137 [2024-11-26 16:12:49.661437] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:24.137 [2024-11-26 16:12:49.661523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71052 ] 00:07:24.396 [2024-11-26 16:12:49.804000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.396 [2024-11-26 16:12:49.824965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.396 [2024-11-26 16:12:49.859822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71060 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71060 /var/tmp/spdk2.sock 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71060 /var/tmp/spdk2.sock 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71060 /var/tmp/spdk2.sock 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71060 ']' 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.396 16:12:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.655 [2024-11-26 16:12:50.052971] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:24.655 [2024-11-26 16:12:50.053099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71060 ] 00:07:24.655 [2024-11-26 16:12:50.207506] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71052 has claimed it. 00:07:24.655 [2024-11-26 16:12:50.207585] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:25.222 ERROR: process (pid: 71060) is no longer running 00:07:25.222 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71060) - No such process 00:07:25.222 16:12:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.222 16:12:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:25.222 16:12:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:25.223 16:12:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.223 16:12:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.223 16:12:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.223 16:12:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71052 00:07:25.223 16:12:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71052 00:07:25.223 16:12:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71052 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71052 ']' 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71052 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71052 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.482 killing process with pid 71052 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71052' 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71052 00:07:25.482 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71052 00:07:25.741 00:07:25.741 real 0m1.675s 00:07:25.741 user 0m2.024s 00:07:25.741 sys 0m0.432s 00:07:25.741 16:12:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.741 16:12:51 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:25.741 ************************************ 00:07:25.741 END TEST locking_app_on_locked_coremask 00:07:25.741 ************************************ 00:07:25.741 16:12:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:25.741 16:12:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.741 16:12:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.741 16:12:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.741 ************************************ 00:07:25.741 START TEST locking_overlapped_coremask 00:07:25.741 ************************************ 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71100 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71100 /var/tmp/spdk.sock 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71100 ']' 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.741 16:12:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.001 [2024-11-26 16:12:51.394797] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:26.001 [2024-11-26 16:12:51.394906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71100 ] 00:07:26.001 [2024-11-26 16:12:51.530933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.001 [2024-11-26 16:12:51.552287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.001 [2024-11-26 16:12:51.552455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.001 [2024-11-26 16:12:51.552469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.001 [2024-11-26 16:12:51.604893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71118 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71118 /var/tmp/spdk2.sock 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71118 /var/tmp/spdk2.sock 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71118 /var/tmp/spdk2.sock 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71118 ']' 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.938 16:12:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.938 [2024-11-26 16:12:52.398315] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:26.938 [2024-11-26 16:12:52.398983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71118 ] 00:07:26.938 [2024-11-26 16:12:52.552282] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71100 has claimed it. 00:07:26.938 [2024-11-26 16:12:52.552337] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:27.507 ERROR: process (pid: 71118) is no longer running 00:07:27.507 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71118) - No such process 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71100 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71100 ']' 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71100 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71100 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.507 killing process with pid 71100 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71100' 00:07:27.507 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71100 00:07:27.507 16:12:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71100 00:07:27.766 00:07:27.766 real 0m1.985s 00:07:27.766 user 0m5.823s 00:07:27.766 sys 0m0.313s 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.766 ************************************ 00:07:27.766 END TEST locking_overlapped_coremask 00:07:27.766 ************************************ 00:07:27.766 16:12:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:27.766 16:12:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.766 16:12:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.766 16:12:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.766 ************************************ 00:07:27.766 START TEST locking_overlapped_coremask_via_rpc 00:07:27.766 ************************************ 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71158 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71158 /var/tmp/spdk.sock 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71158 ']' 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.766 16:12:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.025 [2024-11-26 16:12:53.430754] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:28.025 [2024-11-26 16:12:53.430880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71158 ] 00:07:28.025 [2024-11-26 16:12:53.571129] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:28.025 [2024-11-26 16:12:53.571190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.025 [2024-11-26 16:12:53.591212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.025 [2024-11-26 16:12:53.591339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.025 [2024-11-26 16:12:53.591368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.025 [2024-11-26 16:12:53.626315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71178 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71178 /var/tmp/spdk2.sock 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71178 ']' 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.963 16:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.963 [2024-11-26 16:12:54.475653] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:28.963 [2024-11-26 16:12:54.476496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71178 ] 00:07:29.222 [2024-11-26 16:12:54.639897] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:29.222 [2024-11-26 16:12:54.639955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.222 [2024-11-26 16:12:54.686480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.222 [2024-11-26 16:12:54.686541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.222 [2024-11-26 16:12:54.686542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:29.222 [2024-11-26 16:12:54.761950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.159 [2024-11-26 16:12:55.493461] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71158 has claimed it. 
00:07:30.159 request: 00:07:30.159 { 00:07:30.159 "method": "framework_enable_cpumask_locks", 00:07:30.159 "req_id": 1 00:07:30.159 } 00:07:30.159 Got JSON-RPC error response 00:07:30.159 response: 00:07:30.159 { 00:07:30.159 "code": -32603, 00:07:30.159 "message": "Failed to claim CPU core: 2" 00:07:30.159 } 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71158 /var/tmp/spdk.sock 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71158 ']' 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71178 /var/tmp/spdk2.sock 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71178 ']' 00:07:30.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.159 16:12:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.418 ************************************ 00:07:30.418 END TEST locking_overlapped_coremask_via_rpc 00:07:30.418 ************************************ 00:07:30.418 16:12:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.418 16:12:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:30.418 16:12:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:30.418 16:12:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:30.418 16:12:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:30.418 16:12:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:30.418 00:07:30.418 real 0m2.683s 00:07:30.418 user 0m1.431s 00:07:30.418 sys 0m0.174s 00:07:30.418 16:12:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.418 16:12:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.677 16:12:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:30.677 16:12:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71158 ]] 00:07:30.677 16:12:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71158 00:07:30.677 16:12:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71158 ']' 00:07:30.677 16:12:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71158 00:07:30.677 16:12:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:30.677 16:12:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.677 16:12:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71158 00:07:30.677 killing process with pid 71158 00:07:30.678 16:12:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.678 16:12:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.678 16:12:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71158' 00:07:30.678 16:12:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71158 00:07:30.678 16:12:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71158 00:07:30.936 16:12:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71178 ]] 00:07:30.936 16:12:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71178 00:07:30.936 16:12:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71178 ']' 00:07:30.936 16:12:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71178 00:07:30.936 16:12:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:30.936 16:12:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.936 
16:12:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71178 00:07:30.936 killing process with pid 71178 00:07:30.936 16:12:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:30.936 16:12:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71178' 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71178 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71178 00:07:30.937 16:12:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.937 Process with pid 71158 is not found 00:07:30.937 Process with pid 71178 is not found 00:07:30.937 16:12:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:30.937 16:12:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71158 ]] 00:07:30.937 16:12:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71158 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71158 ']' 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71158 00:07:30.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71158) - No such process 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71158 is not found' 00:07:30.937 16:12:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71178 ]] 00:07:30.937 16:12:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71178 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71178 ']' 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71178 00:07:30.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71178) - No such process 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71178 is not found' 00:07:30.937 16:12:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:30.937 ************************************ 00:07:30.937 END TEST cpu_locks 00:07:30.937 ************************************ 00:07:30.937 00:07:30.937 real 0m14.730s 00:07:30.937 user 0m29.787s 00:07:30.937 sys 0m4.041s 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.937 16:12:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.196 00:07:31.196 real 0m40.980s 00:07:31.196 user 1m24.237s 00:07:31.196 sys 0m7.100s 00:07:31.196 16:12:56 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.196 16:12:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:31.196 ************************************ 00:07:31.196 END TEST event 00:07:31.196 ************************************ 00:07:31.196 16:12:56 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:31.196 16:12:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.196 16:12:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.196 16:12:56 -- common/autotest_common.sh@10 -- # set +x 00:07:31.196 ************************************ 00:07:31.196 START TEST thread 00:07:31.196 ************************************ 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:31.196 * Looking for test storage... 
00:07:31.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:31.196 16:12:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.196 16:12:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.196 16:12:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.196 16:12:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.196 16:12:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.196 16:12:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.196 16:12:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.196 16:12:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.196 16:12:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.196 16:12:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.196 16:12:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.196 16:12:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:31.196 16:12:56 thread -- scripts/common.sh@345 -- # : 1 00:07:31.196 16:12:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.196 16:12:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.196 16:12:56 thread -- scripts/common.sh@365 -- # decimal 1 00:07:31.196 16:12:56 thread -- scripts/common.sh@353 -- # local d=1 00:07:31.196 16:12:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.196 16:12:56 thread -- scripts/common.sh@355 -- # echo 1 00:07:31.196 16:12:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.196 16:12:56 thread -- scripts/common.sh@366 -- # decimal 2 00:07:31.196 16:12:56 thread -- scripts/common.sh@353 -- # local d=2 00:07:31.196 16:12:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.196 16:12:56 thread -- scripts/common.sh@355 -- # echo 2 00:07:31.196 16:12:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.196 16:12:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.196 16:12:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.196 16:12:56 thread -- scripts/common.sh@368 -- # return 0 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:31.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.196 --rc genhtml_branch_coverage=1 00:07:31.196 --rc genhtml_function_coverage=1 00:07:31.196 --rc genhtml_legend=1 00:07:31.196 --rc geninfo_all_blocks=1 00:07:31.196 --rc geninfo_unexecuted_blocks=1 00:07:31.196 00:07:31.196 ' 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:31.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.196 --rc genhtml_branch_coverage=1 00:07:31.196 --rc genhtml_function_coverage=1 00:07:31.196 --rc genhtml_legend=1 00:07:31.196 --rc geninfo_all_blocks=1 00:07:31.196 --rc geninfo_unexecuted_blocks=1 00:07:31.196 00:07:31.196 ' 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:31.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:31.196 --rc genhtml_branch_coverage=1 00:07:31.196 --rc genhtml_function_coverage=1 00:07:31.196 --rc genhtml_legend=1 00:07:31.196 --rc geninfo_all_blocks=1 00:07:31.196 --rc geninfo_unexecuted_blocks=1 00:07:31.196 00:07:31.196 ' 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:31.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.196 --rc genhtml_branch_coverage=1 00:07:31.196 --rc genhtml_function_coverage=1 00:07:31.196 --rc genhtml_legend=1 00:07:31.196 --rc geninfo_all_blocks=1 00:07:31.196 --rc geninfo_unexecuted_blocks=1 00:07:31.196 00:07:31.196 ' 00:07:31.196 16:12:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.196 16:12:56 thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.196 ************************************ 00:07:31.196 START TEST thread_poller_perf 00:07:31.196 ************************************ 00:07:31.196 16:12:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:31.455 [2024-11-26 16:12:56.843073] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:31.455 [2024-11-26 16:12:56.843162] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71309 ] 00:07:31.455 [2024-11-26 16:12:56.985523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.455 [2024-11-26 16:12:57.003740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.455 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:32.390 [2024-11-26T16:12:58.043Z] ====================================== 00:07:32.390 [2024-11-26T16:12:58.043Z] busy:2206855442 (cyc) 00:07:32.390 [2024-11-26T16:12:58.043Z] total_run_count: 377000 00:07:32.391 [2024-11-26T16:12:58.044Z] tsc_hz: 2200000000 (cyc) 00:07:32.391 [2024-11-26T16:12:58.044Z] ====================================== 00:07:32.391 [2024-11-26T16:12:58.044Z] poller_cost: 5853 (cyc), 2660 (nsec) 00:07:32.391 00:07:32.391 real 0m1.209s 00:07:32.391 user 0m1.078s 00:07:32.391 sys 0m0.026s 00:07:32.391 16:12:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.649 16:12:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:32.649 ************************************ 00:07:32.649 END TEST thread_poller_perf 00:07:32.649 ************************************ 00:07:32.649 16:12:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.649 16:12:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:32.649 16:12:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.649 16:12:58 thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.649 ************************************ 00:07:32.649 START TEST thread_poller_perf 00:07:32.649 ************************************ 00:07:32.649 16:12:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:32.649 [2024-11-26 16:12:58.099841] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:32.649 [2024-11-26 16:12:58.099953] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71339 ] 00:07:32.649 [2024-11-26 16:12:58.243786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.649 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:32.649 [2024-11-26 16:12:58.260972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.026 [2024-11-26T16:12:59.679Z] ====================================== 00:07:34.026 [2024-11-26T16:12:59.679Z] busy:2201844510 (cyc) 00:07:34.026 [2024-11-26T16:12:59.679Z] total_run_count: 4954000 00:07:34.026 [2024-11-26T16:12:59.679Z] tsc_hz: 2200000000 (cyc) 00:07:34.026 [2024-11-26T16:12:59.679Z] ====================================== 00:07:34.026 [2024-11-26T16:12:59.679Z] poller_cost: 444 (cyc), 201 (nsec) 00:07:34.026 00:07:34.026 real 0m1.217s 00:07:34.026 user 0m1.078s 00:07:34.026 sys 0m0.032s 00:07:34.026 16:12:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.026 ************************************ 00:07:34.026 END TEST thread_poller_perf 00:07:34.026 ************************************ 00:07:34.026 16:12:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:34.026 16:12:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:34.026 ************************************ 00:07:34.026 END TEST thread 00:07:34.026 ************************************ 00:07:34.026 00:07:34.026 real 0m2.678s 00:07:34.026 user 0m2.280s 00:07:34.026 sys 0m0.186s 00:07:34.026 16:12:59 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.026 16:12:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.026 16:12:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:34.026 16:12:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:34.026 16:12:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.026 16:12:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.026 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:07:34.026 ************************************ 00:07:34.026 START TEST app_cmdline 00:07:34.026 ************************************ 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:34.026 * Looking for test storage... 
00:07:34.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.026 16:12:59 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:34.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.026 --rc genhtml_branch_coverage=1 00:07:34.026 --rc genhtml_function_coverage=1 00:07:34.026 --rc genhtml_legend=1 00:07:34.026 --rc geninfo_all_blocks=1 00:07:34.026 --rc geninfo_unexecuted_blocks=1 00:07:34.026 00:07:34.026 ' 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:34.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.026 --rc genhtml_branch_coverage=1 00:07:34.026 --rc genhtml_function_coverage=1 00:07:34.026 --rc genhtml_legend=1 00:07:34.026 --rc geninfo_all_blocks=1 00:07:34.026 --rc geninfo_unexecuted_blocks=1 00:07:34.026 
00:07:34.026 ' 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:34.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.026 --rc genhtml_branch_coverage=1 00:07:34.026 --rc genhtml_function_coverage=1 00:07:34.026 --rc genhtml_legend=1 00:07:34.026 --rc geninfo_all_blocks=1 00:07:34.026 --rc geninfo_unexecuted_blocks=1 00:07:34.026 00:07:34.026 ' 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:34.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.026 --rc genhtml_branch_coverage=1 00:07:34.026 --rc genhtml_function_coverage=1 00:07:34.026 --rc genhtml_legend=1 00:07:34.026 --rc geninfo_all_blocks=1 00:07:34.026 --rc geninfo_unexecuted_blocks=1 00:07:34.026 00:07:34.026 ' 00:07:34.026 16:12:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:34.026 16:12:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71421 00:07:34.026 16:12:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71421 00:07:34.026 16:12:59 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71421 ']' 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.026 16:12:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.026 [2024-11-26 16:12:59.638727] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:34.026 [2024-11-26 16:12:59.638837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71421 ] 00:07:34.326 [2024-11-26 16:12:59.786454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.326 [2024-11-26 16:12:59.807885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.326 [2024-11-26 16:12:59.842696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.624 16:12:59 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.624 16:12:59 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:34.624 16:12:59 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:34.624 { 00:07:34.624 "version": "SPDK v25.01-pre git sha1 2a91567e4", 00:07:34.624 "fields": { 00:07:34.624 "major": 25, 00:07:34.624 "minor": 1, 00:07:34.624 "patch": 0, 00:07:34.624 "suffix": "-pre", 00:07:34.624 "commit": "2a91567e4" 00:07:34.624 } 00:07:34.624 } 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:34.896 16:13:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:34.896 16:13:00 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.155 request: 00:07:35.155 { 00:07:35.155 "method": "env_dpdk_get_mem_stats", 00:07:35.155 "req_id": 1 00:07:35.155 } 00:07:35.155 Got JSON-RPC error response 00:07:35.155 response: 00:07:35.155 { 00:07:35.155 "code": -32601, 00:07:35.155 "message": "Method not found" 00:07:35.155 } 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.155 16:13:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71421 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71421 ']' 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71421 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71421 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.155 killing process with pid 71421 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71421' 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@973 -- # kill 71421 00:07:35.155 16:13:00 app_cmdline -- common/autotest_common.sh@978 -- # wait 71421 00:07:35.413 00:07:35.413 real 0m1.479s 00:07:35.413 user 0m2.001s 00:07:35.413 sys 0m0.380s 00:07:35.413 16:13:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.413 16:13:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.413 ************************************ 00:07:35.413 END TEST app_cmdline 00:07:35.413 ************************************ 00:07:35.413 16:13:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:35.413 16:13:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.413 16:13:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.413 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:07:35.413 ************************************ 00:07:35.413 START TEST version 00:07:35.413 ************************************ 00:07:35.413 16:13:00 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:35.413 * Looking for test storage... 
00:07:35.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:35.413 16:13:00 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.413 16:13:00 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.413 16:13:00 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.671 16:13:01 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.671 16:13:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.671 16:13:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.671 16:13:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.671 16:13:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.671 16:13:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.671 16:13:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.671 16:13:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.671 16:13:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.671 16:13:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.671 16:13:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.671 16:13:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.671 16:13:01 version -- scripts/common.sh@344 -- # case "$op" in 00:07:35.671 16:13:01 version -- scripts/common.sh@345 -- # : 1 00:07:35.671 16:13:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.671 16:13:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.671 16:13:01 version -- scripts/common.sh@365 -- # decimal 1 00:07:35.671 16:13:01 version -- scripts/common.sh@353 -- # local d=1 00:07:35.671 16:13:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.671 16:13:01 version -- scripts/common.sh@355 -- # echo 1 00:07:35.671 16:13:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.671 16:13:01 version -- scripts/common.sh@366 -- # decimal 2 00:07:35.671 16:13:01 version -- scripts/common.sh@353 -- # local d=2 00:07:35.671 16:13:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.671 16:13:01 version -- scripts/common.sh@355 -- # echo 2 00:07:35.671 16:13:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.671 16:13:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.671 16:13:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.671 16:13:01 version -- scripts/common.sh@368 -- # return 0 00:07:35.671 16:13:01 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.671 16:13:01 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.671 --rc genhtml_branch_coverage=1 00:07:35.671 --rc genhtml_function_coverage=1 00:07:35.671 --rc genhtml_legend=1 00:07:35.671 --rc geninfo_all_blocks=1 00:07:35.671 --rc geninfo_unexecuted_blocks=1 00:07:35.671 00:07:35.671 ' 00:07:35.671 16:13:01 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.671 --rc genhtml_branch_coverage=1 00:07:35.671 --rc genhtml_function_coverage=1 00:07:35.671 --rc genhtml_legend=1 00:07:35.671 --rc geninfo_all_blocks=1 00:07:35.671 --rc geninfo_unexecuted_blocks=1 00:07:35.671 00:07:35.671 ' 00:07:35.671 16:13:01 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.671 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:35.671 --rc genhtml_branch_coverage=1 00:07:35.671 --rc genhtml_function_coverage=1 00:07:35.671 --rc genhtml_legend=1 00:07:35.671 --rc geninfo_all_blocks=1 00:07:35.671 --rc geninfo_unexecuted_blocks=1 00:07:35.671 00:07:35.671 ' 00:07:35.671 16:13:01 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.671 --rc genhtml_branch_coverage=1 00:07:35.671 --rc genhtml_function_coverage=1 00:07:35.671 --rc genhtml_legend=1 00:07:35.671 --rc geninfo_all_blocks=1 00:07:35.671 --rc geninfo_unexecuted_blocks=1 00:07:35.671 00:07:35.671 ' 00:07:35.671 16:13:01 version -- app/version.sh@17 -- # get_header_version major 00:07:35.671 16:13:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.671 16:13:01 version -- app/version.sh@14 -- # cut -f2 00:07:35.671 16:13:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.671 16:13:01 version -- app/version.sh@17 -- # major=25 00:07:35.671 16:13:01 version -- app/version.sh@18 -- # get_header_version minor 00:07:35.671 16:13:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.671 16:13:01 version -- app/version.sh@14 -- # cut -f2 00:07:35.671 16:13:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.671 16:13:01 version -- app/version.sh@18 -- # minor=1 00:07:35.671 16:13:01 version -- app/version.sh@19 -- # get_header_version patch 00:07:35.671 16:13:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.671 16:13:01 version -- app/version.sh@14 -- # cut -f2 00:07:35.671 16:13:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.671 16:13:01 version -- app/version.sh@19 -- # patch=0 00:07:35.671 16:13:01 version -- app/version.sh@20 -- # get_header_version suffix 00:07:35.671 16:13:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:35.671 16:13:01 version -- app/version.sh@14 -- # cut -f2 00:07:35.671 16:13:01 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.671 16:13:01 version -- app/version.sh@20 -- # suffix=-pre 00:07:35.671 16:13:01 version -- app/version.sh@22 -- # version=25.1 00:07:35.671 16:13:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:35.671 16:13:01 version -- app/version.sh@28 -- # version=25.1rc0 00:07:35.671 16:13:01 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:35.671 16:13:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:35.671 16:13:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:35.671 16:13:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:35.671 00:07:35.671 real 0m0.249s 00:07:35.671 user 0m0.168s 00:07:35.671 sys 0m0.117s 00:07:35.671 16:13:01 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.671 16:13:01 version -- common/autotest_common.sh@10 -- # set +x 00:07:35.671 ************************************ 00:07:35.671 END TEST version 00:07:35.672 ************************************ 00:07:35.672 16:13:01 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:35.672 16:13:01 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:35.672 16:13:01 -- spdk/autotest.sh@194 -- # uname -s 00:07:35.672 16:13:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:35.672 16:13:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:35.672 16:13:01 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:35.672 16:13:01 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:35.672 16:13:01 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:35.672 16:13:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.672 16:13:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.672 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:07:35.672 ************************************ 00:07:35.672 START TEST spdk_dd 00:07:35.672 ************************************ 00:07:35.672 16:13:01 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:35.672 * Looking for test storage... 00:07:35.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:35.672 16:13:01 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.672 16:13:01 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.672 16:13:01 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.978 16:13:01 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.978 16:13:01 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.978 16:13:01 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.978 16:13:01 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.978 16:13:01 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.978 16:13:01 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.978 16:13:01 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.978 16:13:01 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:35.979 16:13:01 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.979 16:13:01 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.979 --rc genhtml_branch_coverage=1 00:07:35.979 --rc genhtml_function_coverage=1 00:07:35.979 --rc genhtml_legend=1 00:07:35.979 --rc geninfo_all_blocks=1 00:07:35.979 --rc geninfo_unexecuted_blocks=1 00:07:35.979 00:07:35.979 ' 00:07:35.979 16:13:01 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.979 --rc genhtml_branch_coverage=1 00:07:35.979 --rc genhtml_function_coverage=1 00:07:35.979 --rc genhtml_legend=1 00:07:35.979 --rc geninfo_all_blocks=1 00:07:35.979 --rc geninfo_unexecuted_blocks=1 00:07:35.979 00:07:35.979 ' 00:07:35.979 16:13:01 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.979 --rc genhtml_branch_coverage=1 00:07:35.979 --rc genhtml_function_coverage=1 00:07:35.979 --rc genhtml_legend=1 00:07:35.979 --rc geninfo_all_blocks=1 00:07:35.979 --rc geninfo_unexecuted_blocks=1 00:07:35.979 00:07:35.979 ' 00:07:35.979 16:13:01 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.979 --rc genhtml_branch_coverage=1 00:07:35.979 --rc genhtml_function_coverage=1 00:07:35.979 --rc genhtml_legend=1 00:07:35.979 --rc geninfo_all_blocks=1 00:07:35.979 --rc geninfo_unexecuted_blocks=1 00:07:35.979 00:07:35.979 ' 00:07:35.979 16:13:01 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.979 16:13:01 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.979 16:13:01 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.979 16:13:01 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.979 16:13:01 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.979 16:13:01 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:35.979 16:13:01 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.979 16:13:01 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:36.238 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:36.238 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:36.238 16:13:01 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:36.238 16:13:01 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:36.238 16:13:01 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:36.238 16:13:01 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:36.238 16:13:01 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
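The nvme_in_userspace trace just above resolves the NVMe controllers (PCI class 01, subclass 08, progif 02) to 0000:00:10.0 and 0000:00:11.0 via lspci. A minimal standalone sketch of that enumeration follows; it assumes the usual lspci -mm -n -D output format and omits the PCI allow/block filtering that pci_can_use applies.

#!/usr/bin/env bash
# Sketch of iter_pci_class_code 01 08 02 as traced above (scripts/common.sh).
class=$(printf %02x 1)     # 01 - mass storage controller
subclass=$(printf %02x 8)  # 08 - non-volatile memory controller
progif=$(printf %02x 2)    # 02 - NVM Express

# lspci -mm -n -D lines look like: 0000:00:10.0 "0108" "1b36" "0010" -p02 ...
lspci -mm -n -D | grep -i -- "-p${progif}" \
    | awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'
# In this run the two BDFs printed were 0000:00:10.0 and 0000:00:11.0.
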
00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:36.238 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
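The long run of checks streaming past here is check_liburing (dd/common.sh) walking the DT_NEEDED entries that objdump -p reports for the spdk_dd binary and testing each against liburing.so.*. A rough standalone equivalent is sketched below; the binary path is the one used in this run, and the follow-up CONFIG_URING handling from build_config.sh is left out.

#!/usr/bin/env bash
# Sketch of the liburing-linkage probe traced above (dd/common.sh check_liburing).
DD_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path from this run

liburing_in_use=0
while read -r _ lib _; do
    # objdump -p NEEDED lines look like: '  NEEDED               liburing.so.2'
    if [[ $lib == liburing.so.* ]]; then
        liburing_in_use=1
        printf '* spdk_dd linked to liburing\n'
    fi
done < <(objdump -p "$DD_BIN" | grep NEEDED)

echo "liburing_in_use=$liburing_in_use"
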
00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:36.239 16:13:01 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:36.240 16:13:01 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:36.240 16:13:01 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:36.240 * spdk_dd linked to liburing 00:07:36.240 16:13:01 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:36.240 16:13:01 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:36.240 16:13:01 spdk_dd -- 
common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:36.240 16:13:01 spdk_dd -- 
common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@84 -- # 
CONFIG_IPSEC_MB=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:36.240 16:13:01 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:36.240 16:13:01 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:36.240 16:13:01 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:36.240 16:13:01 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:36.240 16:13:01 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:36.240 16:13:01 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:36.240 16:13:01 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:36.240 16:13:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:36.240 16:13:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.240 16:13:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:36.240 ************************************ 00:07:36.240 START TEST spdk_dd_basic_rw 00:07:36.240 ************************************ 00:07:36.240 16:13:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:36.499 * Looking for test storage... 00:07:36.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.499 16:13:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.499 16:13:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.499 16:13:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.499 --rc genhtml_branch_coverage=1 00:07:36.499 --rc genhtml_function_coverage=1 00:07:36.499 --rc genhtml_legend=1 00:07:36.499 --rc geninfo_all_blocks=1 00:07:36.499 --rc geninfo_unexecuted_blocks=1 00:07:36.499 00:07:36.499 ' 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.499 --rc genhtml_branch_coverage=1 00:07:36.499 --rc genhtml_function_coverage=1 00:07:36.499 --rc genhtml_legend=1 00:07:36.499 --rc geninfo_all_blocks=1 00:07:36.499 --rc geninfo_unexecuted_blocks=1 00:07:36.499 00:07:36.499 ' 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.499 --rc genhtml_branch_coverage=1 00:07:36.499 --rc genhtml_function_coverage=1 00:07:36.499 --rc genhtml_legend=1 00:07:36.499 --rc geninfo_all_blocks=1 00:07:36.499 --rc geninfo_unexecuted_blocks=1 00:07:36.499 00:07:36.499 ' 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.499 --rc genhtml_branch_coverage=1 00:07:36.499 --rc genhtml_function_coverage=1 00:07:36.499 --rc genhtml_legend=1 00:07:36.499 --rc geninfo_all_blocks=1 00:07:36.499 --rc geninfo_unexecuted_blocks=1 00:07:36.499 00:07:36.499 ' 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.499 16:13:02 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:36.499 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:36.759 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:36.759 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.760 ************************************ 00:07:36.760 START TEST dd_bs_lt_native_bs 00:07:36.760 ************************************ 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.760 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:36.760 { 00:07:36.760 "subsystems": [ 00:07:36.760 { 00:07:36.760 "subsystem": "bdev", 00:07:36.760 "config": [ 00:07:36.760 { 00:07:36.760 "params": { 00:07:36.760 "trtype": "pcie", 00:07:36.760 "traddr": "0000:00:10.0", 00:07:36.760 "name": "Nvme0" 00:07:36.760 }, 00:07:36.760 "method": "bdev_nvme_attach_controller" 00:07:36.760 }, 00:07:36.760 { 00:07:36.760 "method": "bdev_wait_for_examine" 00:07:36.760 } 00:07:36.760 ] 00:07:36.760 } 00:07:36.760 ] 00:07:36.760 } 00:07:36.760 [2024-11-26 16:13:02.348101] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:36.760 [2024-11-26 16:13:02.348206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71760 ] 00:07:37.017 [2024-11-26 16:13:02.497928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.017 [2024-11-26 16:13:02.521820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.017 [2024-11-26 16:13:02.554727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.017 [2024-11-26 16:13:02.645876] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:37.017 [2024-11-26 16:13:02.645943] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.275 [2024-11-26 16:13:02.718074] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.275 00:07:37.275 real 0m0.492s 00:07:37.275 user 0m0.333s 00:07:37.275 sys 0m0.116s 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.275 16:13:02 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:37.275 ************************************ 00:07:37.275 END TEST dd_bs_lt_native_bs 00:07:37.275 ************************************ 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.275 ************************************ 00:07:37.275 START TEST dd_rw 00:07:37.275 ************************************ 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:37.275 16:13:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.841 16:13:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:37.841 16:13:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:37.841 16:13:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.841 16:13:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.841 [2024-11-26 16:13:03.443964] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
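For readability, here is a minimal sketch (plain bash, not the actual test/dd/basic_rw.sh code) of the test matrix the xtrace above is building: the native block size of 4096 bytes, taken from the identify output earlier in the log (the current LBA format, #04, reports a data size of 4096 with no metadata, matched by the regex visible in the trace), is left-shifted to produce three block sizes, and each is paired with queue depths 1 and 64. The per-size block count (15 here, with 7 and 3 appearing for the larger block sizes later in the log) is copied from the trace; how basic_rw chooses it is not shown in this excerpt.

# Sketch of the bs/qd matrix seen in the trace above; variable names mirror the trace.
native_bs=4096                      # from "LBA Format #04: Data Size: 4096" in the identify output
qds=(1 64)
bss=()
for bs in {0..2}; do
  bss+=($((native_bs << bs)))       # 4096, 8192, 16384
done
count=15                            # value observed in the trace for bs=4096
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    size=$((bs * count))            # 61440 when bs=4096 and count=15
    printf 'bs=%d qd=%d size=%d\n' "$bs" "$qd" "$size"
  done
done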
00:07:37.841 [2024-11-26 16:13:03.444071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71796 ] 00:07:37.841 { 00:07:37.841 "subsystems": [ 00:07:37.841 { 00:07:37.841 "subsystem": "bdev", 00:07:37.841 "config": [ 00:07:37.841 { 00:07:37.841 "params": { 00:07:37.841 "trtype": "pcie", 00:07:37.841 "traddr": "0000:00:10.0", 00:07:37.841 "name": "Nvme0" 00:07:37.841 }, 00:07:37.841 "method": "bdev_nvme_attach_controller" 00:07:37.841 }, 00:07:37.841 { 00:07:37.841 "method": "bdev_wait_for_examine" 00:07:37.841 } 00:07:37.841 ] 00:07:37.841 } 00:07:37.842 ] 00:07:37.842 } 00:07:38.100 [2024-11-26 16:13:03.588796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.100 [2024-11-26 16:13:03.606890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.100 [2024-11-26 16:13:03.633796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.100  [2024-11-26T16:13:04.012Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:38.359 00:07:38.359 16:13:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:38.359 16:13:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:38.359 16:13:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.359 16:13:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.359 [2024-11-26 16:13:03.877566] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
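The JSON block repeated with every spdk_dd call above is the bdev configuration emitted by gen_conf; the --json /dev/fd/62 argument shows it being handed to spdk_dd over an anonymous file descriptor rather than a file on disk. Below is a hedged sketch of an equivalent standalone invocation: the spdk_dd path, the PCIe address 0000:00:10.0, the bdev name and the flags are copied from the log, while the use of a shell process substitution to produce the descriptor is an assumption, not something the trace states.

# Sketch: attach the controller at 0000:00:10.0 as bdev Nvme0n1 and copy dd.dump0
# onto it with 4096-byte blocks at queue depth 1, feeding the config via /dev/fd.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 \
    --bs=4096 --qd=1 --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)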
00:07:38.359 [2024-11-26 16:13:03.877670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71804 ] 00:07:38.359 { 00:07:38.359 "subsystems": [ 00:07:38.359 { 00:07:38.359 "subsystem": "bdev", 00:07:38.359 "config": [ 00:07:38.359 { 00:07:38.359 "params": { 00:07:38.359 "trtype": "pcie", 00:07:38.359 "traddr": "0000:00:10.0", 00:07:38.359 "name": "Nvme0" 00:07:38.359 }, 00:07:38.359 "method": "bdev_nvme_attach_controller" 00:07:38.359 }, 00:07:38.360 { 00:07:38.360 "method": "bdev_wait_for_examine" 00:07:38.360 } 00:07:38.360 ] 00:07:38.360 } 00:07:38.360 ] 00:07:38.360 } 00:07:38.619 [2024-11-26 16:13:04.020462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.619 [2024-11-26 16:13:04.037993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.619 [2024-11-26 16:13:04.065342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.619  [2024-11-26T16:13:04.272Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:38.619 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.878 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.878 { 00:07:38.878 "subsystems": [ 00:07:38.878 { 00:07:38.878 "subsystem": "bdev", 00:07:38.878 "config": [ 00:07:38.878 { 00:07:38.878 "params": { 00:07:38.878 "trtype": "pcie", 00:07:38.878 "traddr": "0000:00:10.0", 00:07:38.878 "name": "Nvme0" 00:07:38.878 }, 00:07:38.878 "method": "bdev_nvme_attach_controller" 00:07:38.878 }, 00:07:38.878 { 00:07:38.878 "method": "bdev_wait_for_examine" 00:07:38.878 } 00:07:38.878 ] 00:07:38.878 } 00:07:38.878 ] 00:07:38.878 } 00:07:38.878 [2024-11-26 16:13:04.326976] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
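Putting the pieces of the trace together, each block-size/queue-depth combination follows the same round: write the generated pattern file to the Nvme0n1 bdev, read the same number of blocks back into a second file, byte-compare the two, then blank the first megabyte of the bdev before the next combination. A small sketch under the same assumptions as above; CONF is a stand-in for a file holding the JSON config printed in the log, and the paths are taken verbatim from the trace.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=bdev.json        # assumed to contain the JSON bdev config shown in the log
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

# write the pattern, then read it back with the same geometry
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json "$CONF"
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json "$CONF"

# the round only passes if the data survived the trip through the bdev
diff -q "$DUMP0" "$DUMP1"

# clear_nvme: overwrite the first 1 MiB with zeros before the next combination
"$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json "$CONF"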
00:07:38.878 [2024-11-26 16:13:04.327078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71820 ] 00:07:38.878 [2024-11-26 16:13:04.472304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.878 [2024-11-26 16:13:04.489787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.878 [2024-11-26 16:13:04.516836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.138  [2024-11-26T16:13:04.791Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:39.138 00:07:39.138 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:39.138 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:39.138 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:39.138 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:39.138 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:39.138 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:39.138 16:13:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.705 16:13:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:39.705 16:13:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:39.705 16:13:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.705 16:13:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.965 [2024-11-26 16:13:05.362558] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:39.965 [2024-11-26 16:13:05.362673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71839 ] 00:07:39.965 { 00:07:39.965 "subsystems": [ 00:07:39.965 { 00:07:39.965 "subsystem": "bdev", 00:07:39.965 "config": [ 00:07:39.965 { 00:07:39.965 "params": { 00:07:39.965 "trtype": "pcie", 00:07:39.965 "traddr": "0000:00:10.0", 00:07:39.965 "name": "Nvme0" 00:07:39.965 }, 00:07:39.965 "method": "bdev_nvme_attach_controller" 00:07:39.965 }, 00:07:39.965 { 00:07:39.965 "method": "bdev_wait_for_examine" 00:07:39.965 } 00:07:39.965 ] 00:07:39.965 } 00:07:39.965 ] 00:07:39.965 } 00:07:39.965 [2024-11-26 16:13:05.504113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.965 [2024-11-26 16:13:05.521790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.965 [2024-11-26 16:13:05.548582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.226  [2024-11-26T16:13:05.879Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:40.226 00:07:40.226 16:13:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:40.226 16:13:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:40.226 16:13:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.226 16:13:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.226 { 00:07:40.226 "subsystems": [ 00:07:40.226 { 00:07:40.226 "subsystem": "bdev", 00:07:40.226 "config": [ 00:07:40.226 { 00:07:40.226 "params": { 00:07:40.226 "trtype": "pcie", 00:07:40.226 "traddr": "0000:00:10.0", 00:07:40.226 "name": "Nvme0" 00:07:40.226 }, 00:07:40.226 "method": "bdev_nvme_attach_controller" 00:07:40.226 }, 00:07:40.226 { 00:07:40.226 "method": "bdev_wait_for_examine" 00:07:40.226 } 00:07:40.226 ] 00:07:40.226 } 00:07:40.226 ] 00:07:40.226 } 00:07:40.226 [2024-11-26 16:13:05.808811] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:40.226 [2024-11-26 16:13:05.808922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71852 ] 00:07:40.486 [2024-11-26 16:13:05.955101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.486 [2024-11-26 16:13:05.974722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.486 [2024-11-26 16:13:06.002266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.486  [2024-11-26T16:13:06.398Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:40.745 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.745 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.745 [2024-11-26 16:13:06.265255] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:40.745 [2024-11-26 16:13:06.265378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71868 ] 00:07:40.745 { 00:07:40.745 "subsystems": [ 00:07:40.745 { 00:07:40.745 "subsystem": "bdev", 00:07:40.745 "config": [ 00:07:40.745 { 00:07:40.745 "params": { 00:07:40.745 "trtype": "pcie", 00:07:40.745 "traddr": "0000:00:10.0", 00:07:40.745 "name": "Nvme0" 00:07:40.745 }, 00:07:40.745 "method": "bdev_nvme_attach_controller" 00:07:40.745 }, 00:07:40.745 { 00:07:40.745 "method": "bdev_wait_for_examine" 00:07:40.745 } 00:07:40.745 ] 00:07:40.745 } 00:07:40.745 ] 00:07:40.745 } 00:07:41.005 [2024-11-26 16:13:06.408811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.005 [2024-11-26 16:13:06.427723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.005 [2024-11-26 16:13:06.455266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.005  [2024-11-26T16:13:06.917Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:41.264 00:07:41.264 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:41.264 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:41.264 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:41.264 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:41.264 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:41.264 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:41.264 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:41.264 16:13:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.833 16:13:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:41.833 16:13:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:41.833 16:13:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.833 16:13:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.833 [2024-11-26 16:13:07.234557] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:41.833 [2024-11-26 16:13:07.234654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71887 ] 00:07:41.833 { 00:07:41.833 "subsystems": [ 00:07:41.833 { 00:07:41.833 "subsystem": "bdev", 00:07:41.833 "config": [ 00:07:41.833 { 00:07:41.833 "params": { 00:07:41.833 "trtype": "pcie", 00:07:41.833 "traddr": "0000:00:10.0", 00:07:41.833 "name": "Nvme0" 00:07:41.833 }, 00:07:41.833 "method": "bdev_nvme_attach_controller" 00:07:41.833 }, 00:07:41.833 { 00:07:41.833 "method": "bdev_wait_for_examine" 00:07:41.833 } 00:07:41.833 ] 00:07:41.833 } 00:07:41.833 ] 00:07:41.833 } 00:07:41.833 [2024-11-26 16:13:07.379460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.833 [2024-11-26 16:13:07.398647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.833 [2024-11-26 16:13:07.426102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.092  [2024-11-26T16:13:07.745Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:42.092 00:07:42.092 16:13:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:42.092 16:13:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:42.092 16:13:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.092 16:13:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.092 [2024-11-26 16:13:07.676587] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:42.092 [2024-11-26 16:13:07.676680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71900 ] 00:07:42.092 { 00:07:42.092 "subsystems": [ 00:07:42.092 { 00:07:42.092 "subsystem": "bdev", 00:07:42.092 "config": [ 00:07:42.092 { 00:07:42.092 "params": { 00:07:42.092 "trtype": "pcie", 00:07:42.092 "traddr": "0000:00:10.0", 00:07:42.092 "name": "Nvme0" 00:07:42.092 }, 00:07:42.092 "method": "bdev_nvme_attach_controller" 00:07:42.092 }, 00:07:42.092 { 00:07:42.092 "method": "bdev_wait_for_examine" 00:07:42.092 } 00:07:42.092 ] 00:07:42.092 } 00:07:42.092 ] 00:07:42.092 } 00:07:42.351 [2024-11-26 16:13:07.820308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.351 [2024-11-26 16:13:07.838734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.351 [2024-11-26 16:13:07.866882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.351  [2024-11-26T16:13:08.263Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:42.610 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.610 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.610 [2024-11-26 16:13:08.132539] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:42.610 [2024-11-26 16:13:08.132642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71916 ] 00:07:42.610 { 00:07:42.610 "subsystems": [ 00:07:42.610 { 00:07:42.610 "subsystem": "bdev", 00:07:42.610 "config": [ 00:07:42.610 { 00:07:42.610 "params": { 00:07:42.610 "trtype": "pcie", 00:07:42.610 "traddr": "0000:00:10.0", 00:07:42.610 "name": "Nvme0" 00:07:42.610 }, 00:07:42.610 "method": "bdev_nvme_attach_controller" 00:07:42.610 }, 00:07:42.610 { 00:07:42.610 "method": "bdev_wait_for_examine" 00:07:42.610 } 00:07:42.610 ] 00:07:42.610 } 00:07:42.610 ] 00:07:42.610 } 00:07:42.870 [2024-11-26 16:13:08.277364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.870 [2024-11-26 16:13:08.296993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.870 [2024-11-26 16:13:08.326105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.870  [2024-11-26T16:13:08.782Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:43.129 00:07:43.129 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:43.129 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:43.129 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:43.129 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:43.129 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:43.129 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:43.129 16:13:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.698 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:43.698 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:43.698 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.698 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.698 [2024-11-26 16:13:09.101738] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:43.698 [2024-11-26 16:13:09.101835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71935 ] 00:07:43.698 { 00:07:43.698 "subsystems": [ 00:07:43.698 { 00:07:43.698 "subsystem": "bdev", 00:07:43.698 "config": [ 00:07:43.698 { 00:07:43.698 "params": { 00:07:43.698 "trtype": "pcie", 00:07:43.698 "traddr": "0000:00:10.0", 00:07:43.698 "name": "Nvme0" 00:07:43.698 }, 00:07:43.698 "method": "bdev_nvme_attach_controller" 00:07:43.698 }, 00:07:43.698 { 00:07:43.698 "method": "bdev_wait_for_examine" 00:07:43.698 } 00:07:43.698 ] 00:07:43.698 } 00:07:43.698 ] 00:07:43.698 } 00:07:43.698 [2024-11-26 16:13:09.245086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.698 [2024-11-26 16:13:09.265172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.698 [2024-11-26 16:13:09.294508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.957  [2024-11-26T16:13:09.610Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:43.957 00:07:43.957 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:43.957 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:43.957 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.957 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.957 [2024-11-26 16:13:09.535198] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:43.957 [2024-11-26 16:13:09.535294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71943 ] 00:07:43.957 { 00:07:43.957 "subsystems": [ 00:07:43.957 { 00:07:43.957 "subsystem": "bdev", 00:07:43.957 "config": [ 00:07:43.957 { 00:07:43.957 "params": { 00:07:43.957 "trtype": "pcie", 00:07:43.957 "traddr": "0000:00:10.0", 00:07:43.958 "name": "Nvme0" 00:07:43.958 }, 00:07:43.958 "method": "bdev_nvme_attach_controller" 00:07:43.958 }, 00:07:43.958 { 00:07:43.958 "method": "bdev_wait_for_examine" 00:07:43.958 } 00:07:43.958 ] 00:07:43.958 } 00:07:43.958 ] 00:07:43.958 } 00:07:44.216 [2024-11-26 16:13:09.680811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.216 [2024-11-26 16:13:09.700974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.216 [2024-11-26 16:13:09.728591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.216  [2024-11-26T16:13:10.127Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:44.474 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:44.474 16:13:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.474 [2024-11-26 16:13:09.984375] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:44.474 [2024-11-26 16:13:09.984473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71958 ] 00:07:44.474 { 00:07:44.474 "subsystems": [ 00:07:44.474 { 00:07:44.474 "subsystem": "bdev", 00:07:44.474 "config": [ 00:07:44.474 { 00:07:44.474 "params": { 00:07:44.474 "trtype": "pcie", 00:07:44.474 "traddr": "0000:00:10.0", 00:07:44.474 "name": "Nvme0" 00:07:44.474 }, 00:07:44.474 "method": "bdev_nvme_attach_controller" 00:07:44.474 }, 00:07:44.474 { 00:07:44.474 "method": "bdev_wait_for_examine" 00:07:44.474 } 00:07:44.474 ] 00:07:44.474 } 00:07:44.474 ] 00:07:44.474 } 00:07:44.734 [2024-11-26 16:13:10.129809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.734 [2024-11-26 16:13:10.148766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.734 [2024-11-26 16:13:10.178036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.734  [2024-11-26T16:13:10.387Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.734 00:07:44.734 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:44.734 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:44.734 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:44.734 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:44.734 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:44.734 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:44.734 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:44.734 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.302 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:45.302 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:45.302 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.302 16:13:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.302 [2024-11-26 16:13:10.818472] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:45.302 [2024-11-26 16:13:10.818609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71977 ] 00:07:45.302 { 00:07:45.302 "subsystems": [ 00:07:45.302 { 00:07:45.302 "subsystem": "bdev", 00:07:45.302 "config": [ 00:07:45.302 { 00:07:45.302 "params": { 00:07:45.302 "trtype": "pcie", 00:07:45.302 "traddr": "0000:00:10.0", 00:07:45.302 "name": "Nvme0" 00:07:45.302 }, 00:07:45.302 "method": "bdev_nvme_attach_controller" 00:07:45.303 }, 00:07:45.303 { 00:07:45.303 "method": "bdev_wait_for_examine" 00:07:45.303 } 00:07:45.303 ] 00:07:45.303 } 00:07:45.303 ] 00:07:45.303 } 00:07:45.562 [2024-11-26 16:13:10.964756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.562 [2024-11-26 16:13:10.982537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.562 [2024-11-26 16:13:11.009345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.562  [2024-11-26T16:13:11.215Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:45.562 00:07:45.562 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:45.562 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:45.562 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.562 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.821 [2024-11-26 16:13:11.255256] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:45.821 [2024-11-26 16:13:11.255375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71991 ] 00:07:45.821 { 00:07:45.821 "subsystems": [ 00:07:45.821 { 00:07:45.822 "subsystem": "bdev", 00:07:45.822 "config": [ 00:07:45.822 { 00:07:45.822 "params": { 00:07:45.822 "trtype": "pcie", 00:07:45.822 "traddr": "0000:00:10.0", 00:07:45.822 "name": "Nvme0" 00:07:45.822 }, 00:07:45.822 "method": "bdev_nvme_attach_controller" 00:07:45.822 }, 00:07:45.822 { 00:07:45.822 "method": "bdev_wait_for_examine" 00:07:45.822 } 00:07:45.822 ] 00:07:45.822 } 00:07:45.822 ] 00:07:45.822 } 00:07:45.822 [2024-11-26 16:13:11.399781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.822 [2024-11-26 16:13:11.417192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.822 [2024-11-26 16:13:11.444000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.081  [2024-11-26T16:13:11.734Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:46.081 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.081 16:13:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.081 [2024-11-26 16:13:11.697794] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:46.081 [2024-11-26 16:13:11.697904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72001 ] 00:07:46.081 { 00:07:46.081 "subsystems": [ 00:07:46.081 { 00:07:46.081 "subsystem": "bdev", 00:07:46.081 "config": [ 00:07:46.081 { 00:07:46.081 "params": { 00:07:46.081 "trtype": "pcie", 00:07:46.081 "traddr": "0000:00:10.0", 00:07:46.081 "name": "Nvme0" 00:07:46.081 }, 00:07:46.081 "method": "bdev_nvme_attach_controller" 00:07:46.081 }, 00:07:46.081 { 00:07:46.081 "method": "bdev_wait_for_examine" 00:07:46.081 } 00:07:46.081 ] 00:07:46.081 } 00:07:46.081 ] 00:07:46.081 } 00:07:46.341 [2024-11-26 16:13:11.846099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.341 [2024-11-26 16:13:11.867169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.341 [2024-11-26 16:13:11.899167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.341  [2024-11-26T16:13:12.253Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:46.600 00:07:46.600 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:46.600 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:46.600 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:46.600 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:46.600 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:46.600 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:46.600 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.168 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:47.168 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:47.168 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.168 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.168 [2024-11-26 16:13:12.584986] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:47.168 [2024-11-26 16:13:12.585097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72020 ] 00:07:47.168 { 00:07:47.168 "subsystems": [ 00:07:47.168 { 00:07:47.168 "subsystem": "bdev", 00:07:47.168 "config": [ 00:07:47.168 { 00:07:47.168 "params": { 00:07:47.168 "trtype": "pcie", 00:07:47.168 "traddr": "0000:00:10.0", 00:07:47.168 "name": "Nvme0" 00:07:47.168 }, 00:07:47.168 "method": "bdev_nvme_attach_controller" 00:07:47.168 }, 00:07:47.168 { 00:07:47.168 "method": "bdev_wait_for_examine" 00:07:47.168 } 00:07:47.168 ] 00:07:47.168 } 00:07:47.168 ] 00:07:47.168 } 00:07:47.168 [2024-11-26 16:13:12.734558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.168 [2024-11-26 16:13:12.755770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.168 [2024-11-26 16:13:12.784250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.427  [2024-11-26T16:13:13.080Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:47.427 00:07:47.427 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:47.428 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:47.428 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.428 16:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.428 [2024-11-26 16:13:13.044767] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:47.428 [2024-11-26 16:13:13.044904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72033 ] 00:07:47.428 { 00:07:47.428 "subsystems": [ 00:07:47.428 { 00:07:47.428 "subsystem": "bdev", 00:07:47.428 "config": [ 00:07:47.428 { 00:07:47.428 "params": { 00:07:47.428 "trtype": "pcie", 00:07:47.428 "traddr": "0000:00:10.0", 00:07:47.428 "name": "Nvme0" 00:07:47.428 }, 00:07:47.428 "method": "bdev_nvme_attach_controller" 00:07:47.428 }, 00:07:47.428 { 00:07:47.428 "method": "bdev_wait_for_examine" 00:07:47.428 } 00:07:47.428 ] 00:07:47.428 } 00:07:47.428 ] 00:07:47.428 } 00:07:47.686 [2024-11-26 16:13:13.189720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.687 [2024-11-26 16:13:13.208284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.687 [2024-11-26 16:13:13.235691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.687  [2024-11-26T16:13:13.598Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:47.945 00:07:47.945 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.945 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:47.946 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:47.946 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:47.946 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:47.946 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:47.946 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:47.946 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:47.946 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:47.946 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.946 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.946 [2024-11-26 16:13:13.495537] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:47.946 [2024-11-26 16:13:13.495642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72049 ] 00:07:47.946 { 00:07:47.946 "subsystems": [ 00:07:47.946 { 00:07:47.946 "subsystem": "bdev", 00:07:47.946 "config": [ 00:07:47.946 { 00:07:47.946 "params": { 00:07:47.946 "trtype": "pcie", 00:07:47.946 "traddr": "0000:00:10.0", 00:07:47.946 "name": "Nvme0" 00:07:47.946 }, 00:07:47.946 "method": "bdev_nvme_attach_controller" 00:07:47.946 }, 00:07:47.946 { 00:07:47.946 "method": "bdev_wait_for_examine" 00:07:47.946 } 00:07:47.946 ] 00:07:47.946 } 00:07:47.946 ] 00:07:47.946 } 00:07:48.205 [2024-11-26 16:13:13.642798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.205 [2024-11-26 16:13:13.661628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.205 [2024-11-26 16:13:13.689195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.205  [2024-11-26T16:13:14.117Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:48.464 00:07:48.464 00:07:48.464 real 0m11.074s 00:07:48.464 user 0m8.185s 00:07:48.464 sys 0m3.407s 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 ************************************ 00:07:48.464 END TEST dd_rw 00:07:48.464 ************************************ 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 ************************************ 00:07:48.464 START TEST dd_rw_offset 00:07:48.464 ************************************ 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=vcerv9yp4t1ozk14d4x7adkt8jdixwaydr4wjq2j20s4u92jq0qyc83biqkdsvunw8l06s7pgxv4sqlq78kuxd7ewjepwr52i2bppormo8lo4jv7lclnsa20ze8cincp96w4kn7nsw3ak78l8zk7ysyzlyrsbdq77m30qblxjiys4rxv92tv27nua21kvya0ycgriff5j5bjyu703sdc7bst1ecixmyd7uk70m79bsoiopkrsr6bldvteht843wrj38jgb6a3xkxylwkui3i4wfiar92w4wl1fgd4sd5hxppgl74nw97tctywlqs9lxqz4s3f5z22lxo7hecfc8jij3ahfka2kkjf3nxr64a8zgtvu9r09ztclqax2x2xach174ohvwtmj671sromvi7svih35cyo8kjvj2t522pe33of2lj6u9w2aqtpz4ses5adu045cpsi4f7f4borzxorva2p2cwv20lpqr7s9w514xsi82x0pvtvjixfm9lpqpxdk9j8n79dqouyl4zg42lpnlpmo3zl7kgukd92y4kgh3d46tzo4dcy4cxt3wtdv9honl5dtrj96qph8fvkd3nc2jc5zp9un8dxg55eokf9cgwsnj3kky8uc2dn9ddpk912huds42hw4ea06z2uaohcsd5hfus9b19ccbfq69214nz9xfmef9n774tsrocggqx53zwisxmcp0v5gf45g4e958mxns0gk27820pvuzvym0yt0z6a7c8ak66r1rse7omvjaxqhruzmp8xk1k8f15369qqkczns6e9ih0cqkmrndvmf5ihwoijudgq4qeptzlkqampue3tl9lqh7w53dd8zheawo157ez4w69kuu1uwqcukcbwh8q3iwf5vwtnhzrc32qgyb3f55p13ujylax2y53lmx5wki4teo3ncxjasl4nqezoqsta2knz1ebjvx66l9si67c12vv64ws6v54hao4637vs4ez9dakv9oelrqch36mkyzm4nadbdy8xilg58roz4czhz7sw700ce0i3o2n4r6v2i3c5ilu4704hq1dvrcxanm1lhf9jzgh8qh16gnmjaq8e5hyl9rpdf80wbvjyav7wklnpagp0q1mfny7n27r44dyc9mspserx82lzld6qui8a4xgibwsgf6u9rygslvjjufm7316sbvine094pnbqkceb8a0bed7poh6tv9xf4xda0xx9ftm84scekddo4uvk0heun6g8qnj7kslwamciokk6f3af8hrtsctun1t2m9y7345ah0qqxbkuocsr79vtnzer6o7ql22dibu0som8sj8abxvaocjvznss61r0v5fdont2579vte3lnp9z8c5ih10etkk8e02pcjgmdtnk8hrn8zxms4gxlcue73jo5ifeyutwn2zr412agdhi3obmblze4r9vb10lxkdt98f47s9re3i2dd1e50uc6z6fpucql5rm5flr6yo8mef8yq0lqteup45jqtsgrpchnd3zlol0i36fxhk0p4d483igt727b52x93wnp08l7tpy3do3wtgn240mhy9x8c8dtm5cr0i2zpy31819ivcv8tb9wmelw0e24kqhodigj4n79m65cfbq8sztvd6pd4lbbx4nqm6x5ewysemdu7rp90znr9m2d40zinsxuimahggk32kw8l5qnydm8ypntfk33a9yzilz4oo6ffodroxso0kg73uumniaye1nisspbh2akoahzqse3d313e1ob3hcgu8s2k0r8pwpivuzr6lycmvfdqtwk6umukfzz3v1pn91pyk06kodavko5l5dm3faw190cht1ih69xbla04al6ounum0u91q2zdpu95ila6123d0wvmhexuw9y55kfge7fs2uuoxvl5iok0p3s1ygdyaymbpi9j8qrnnl4joa4eq5dq8s8680mozjrzb1m87c8toqaqts3rdfbre7fvft104v3j50lda3navcxg1oxmjrf7zx6er2h8by1hfktpyvuwb2fcw39j8sfz8v73ygmudf5a6j53jz9dhjo8snmrmv90pvb89vpwn8v59plz4kvt6dprjulh7orniipd24td4x7jyha8bbjceb8k6rstt5f13ty8co037ica7n21lu629fi46mdxjchlxklji1ou665kut4ocmj8vz0am7n20x3v8vt60gkowzdwcbt4eu1owdm3jlgtpv8gkgu6qftilja3gsfxazg85bnit3r6m6ts6ujg4wa2dpfoa4qnwus2mqrzf8lnxqull8aw0lze2sdthsvlvo5vf7piisptlvcigy9w9wdkq2zpoo6tan0dgwfn3i8sokwfqa2ipyold1xtz81s6acwicszq6rxyd21jh7h2u5i506xb4557rs2e2lb2uv5z8vtdtjzedpukp7nrr3br5m1e30u8caiaalqcethailz3sizuhru51mkxz4yoqhvtw2burej38461bltr2j1vptc1r3xiz23pzm95jtaupo0p9ipwtainkb1dubjdkairpany6tbkk1643huzntntshiftgdfq00iw411haqer98i0n1u3wix2g3x0d82tf9s0nmpvo7xjul06mr8xw1zz9vj37ozuyj0t0kt3r5q80aqrkppl714r9pqpv37q5t6vmbzl7su5wwg7eccegpbwhlahvf1f8a9u0kledx7wmwzftdvp1cxdjdu7pqe6geeu7of1pds8wx47n4peb2ogpqlvh72owt7drzenujtclmiep0jbfafcu7yu4oemn2o0fndn161p9oj6r5ukhzmgejuez7mkhb0wq2wqgzrzlwozl3c2s6yykkybq0xnhxno5n8bfdc59i35nwibtc7igd3lkccmp7ftk3dqzttc7nolqyad0jkbq1vqy9p8q5axvespv9t437mip49b1t9gicz202yefewkjp3p95lbjhjr8a1oxgvf2k0wr5n32siue2cifiabkyrjfu6typm50w29idgpshdkyn9uums7vvhcl2zkxobnq1d6y7ntm32hadypjt3ejrngeevs2firzf8k8p5u21oe5roaa8c1ec75c5lgsvc6rz8915g5y5a2ep40q38ewfjnjhyn24vvfbhpv1i5rj5qjksgu3j8nkle970huwelp6u1t5a22q9qrgx8t2t7r9q4o0wo8urkav4xo0aznl2h3agyx2u449x7lr3zomhukqvq8klvgll1dmllrx2r9igum5tl8lan2dshu2guo454ud09s1vrgs7w8oj2mx0n1yj9olz4u6q8uaklhek4wly3bz99q0r5iq4x8lvf8scqxw0jqosv3zgx9072lcc1rcih964l4embtnuqdet2xtrryox7p6xrijn86ehul7nl8qczdj26yos8uh1z0w8tjrjnhcrb09cov2p3j8bz10ow0ebsq7hmlv1wxhkjzauu7yh1ot5o1obgacwdbc8i4jhpia8kiiqp566hmnrqekeu4150glfx1tapw7pzu3l9a2780
jtpse812msq2xw6mtddpmad4s4d6aj6bshbcuwt6q4z5iio2fuvv10phc72d92vmktkkd56a8gjryo0nt2dqkoj2iy2jbzbt7l7s4d5d6hqop2q5vob8g46jhpm7bsx5up5573d4iuokbqf06wc7srqji7i4m5503fjejtp3451fit8i6kmd8xjqobnmrati9bdh2vkf4tkcrdbqa7lf8dvs1pk884lhs4j8mq6pyqp4120hvcvvmxgiuhg0cmg778q20dmhbwwb8klp8ccfgeaizkgo5hij7e4nfr68b1ujqk6840ws1cxk7ecm7b4q30p05wncr4t0lroejido2fm64978ubyegtwa5rjnhu5jqhdmaz5spmn8nc2ftlb0iouzm4bofdhsdvvxoemkpx15dtulrmb1j9va4smd30m200ae6axulwe1ujwlu07uo7zud7vkbnacz1h9i0bg305so9ziasmpyqntubovzo5y7gnorqvsu9b7awa429uxif0n45ymybuak5do2vzllivjx01mf1mnlu 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:48.464 16:13:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 [2024-11-26 16:13:14.051273] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:48.464 [2024-11-26 16:13:14.051562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72079 ] 00:07:48.464 { 00:07:48.464 "subsystems": [ 00:07:48.464 { 00:07:48.464 "subsystem": "bdev", 00:07:48.464 "config": [ 00:07:48.464 { 00:07:48.464 "params": { 00:07:48.464 "trtype": "pcie", 00:07:48.464 "traddr": "0000:00:10.0", 00:07:48.464 "name": "Nvme0" 00:07:48.464 }, 00:07:48.464 "method": "bdev_nvme_attach_controller" 00:07:48.464 }, 00:07:48.464 { 00:07:48.464 "method": "bdev_wait_for_examine" 00:07:48.464 } 00:07:48.464 ] 00:07:48.464 } 00:07:48.464 ] 00:07:48.464 } 00:07:48.723 [2024-11-26 16:13:14.194741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.723 [2024-11-26 16:13:14.213293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.723 [2024-11-26 16:13:14.240167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.723  [2024-11-26T16:13:14.635Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:48.982 00:07:48.982 16:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:48.982 16:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:48.982 16:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:48.982 16:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:48.982 [2024-11-26 16:13:14.492835] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
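The dd_rw_offset test starting above exercises seek/skip handling rather than raw throughput: gen_bytes produces 4096 bytes of random ASCII (the long string in the trace), the pattern is written one block into the bdev with --seek=1, read back from the same offset with --skip=1 --count=1, and compared character by character. A sketch under the same assumptions as the earlier snippets, with the variable definitions repeated for completeness; the pattern is assumed to already be in DUMP0.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=bdev.json        # assumed to contain the JSON bdev config shown in the log
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

# write one native block of the pattern at block offset 1, read it back from the
# same offset, and check that what comes back matches what went in
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json "$CONF"
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json "$CONF"

data=$(<"$DUMP0")                       # the 4096-character pattern from gen_bytes
read -rn4096 data_check <"$DUMP1"
[[ "$data" == "$data_check" ]]          # mirrors the comparison in the trace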
00:07:48.982 [2024-11-26 16:13:14.492961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72093 ] 00:07:48.982 { 00:07:48.982 "subsystems": [ 00:07:48.982 { 00:07:48.982 "subsystem": "bdev", 00:07:48.982 "config": [ 00:07:48.982 { 00:07:48.982 "params": { 00:07:48.982 "trtype": "pcie", 00:07:48.982 "traddr": "0000:00:10.0", 00:07:48.982 "name": "Nvme0" 00:07:48.982 }, 00:07:48.982 "method": "bdev_nvme_attach_controller" 00:07:48.982 }, 00:07:48.982 { 00:07:48.982 "method": "bdev_wait_for_examine" 00:07:48.982 } 00:07:48.982 ] 00:07:48.982 } 00:07:48.982 ] 00:07:48.982 } 00:07:49.242 [2024-11-26 16:13:14.636585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.242 [2024-11-26 16:13:14.655388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.242 [2024-11-26 16:13:14.683373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.242  [2024-11-26T16:13:14.895Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:49.242 00:07:49.502 16:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ vcerv9yp4t1ozk14d4x7adkt8jdixwaydr4wjq2j20s4u92jq0qyc83biqkdsvunw8l06s7pgxv4sqlq78kuxd7ewjepwr52i2bppormo8lo4jv7lclnsa20ze8cincp96w4kn7nsw3ak78l8zk7ysyzlyrsbdq77m30qblxjiys4rxv92tv27nua21kvya0ycgriff5j5bjyu703sdc7bst1ecixmyd7uk70m79bsoiopkrsr6bldvteht843wrj38jgb6a3xkxylwkui3i4wfiar92w4wl1fgd4sd5hxppgl74nw97tctywlqs9lxqz4s3f5z22lxo7hecfc8jij3ahfka2kkjf3nxr64a8zgtvu9r09ztclqax2x2xach174ohvwtmj671sromvi7svih35cyo8kjvj2t522pe33of2lj6u9w2aqtpz4ses5adu045cpsi4f7f4borzxorva2p2cwv20lpqr7s9w514xsi82x0pvtvjixfm9lpqpxdk9j8n79dqouyl4zg42lpnlpmo3zl7kgukd92y4kgh3d46tzo4dcy4cxt3wtdv9honl5dtrj96qph8fvkd3nc2jc5zp9un8dxg55eokf9cgwsnj3kky8uc2dn9ddpk912huds42hw4ea06z2uaohcsd5hfus9b19ccbfq69214nz9xfmef9n774tsrocggqx53zwisxmcp0v5gf45g4e958mxns0gk27820pvuzvym0yt0z6a7c8ak66r1rse7omvjaxqhruzmp8xk1k8f15369qqkczns6e9ih0cqkmrndvmf5ihwoijudgq4qeptzlkqampue3tl9lqh7w53dd8zheawo157ez4w69kuu1uwqcukcbwh8q3iwf5vwtnhzrc32qgyb3f55p13ujylax2y53lmx5wki4teo3ncxjasl4nqezoqsta2knz1ebjvx66l9si67c12vv64ws6v54hao4637vs4ez9dakv9oelrqch36mkyzm4nadbdy8xilg58roz4czhz7sw700ce0i3o2n4r6v2i3c5ilu4704hq1dvrcxanm1lhf9jzgh8qh16gnmjaq8e5hyl9rpdf80wbvjyav7wklnpagp0q1mfny7n27r44dyc9mspserx82lzld6qui8a4xgibwsgf6u9rygslvjjufm7316sbvine094pnbqkceb8a0bed7poh6tv9xf4xda0xx9ftm84scekddo4uvk0heun6g8qnj7kslwamciokk6f3af8hrtsctun1t2m9y7345ah0qqxbkuocsr79vtnzer6o7ql22dibu0som8sj8abxvaocjvznss61r0v5fdont2579vte3lnp9z8c5ih10etkk8e02pcjgmdtnk8hrn8zxms4gxlcue73jo5ifeyutwn2zr412agdhi3obmblze4r9vb10lxkdt98f47s9re3i2dd1e50uc6z6fpucql5rm5flr6yo8mef8yq0lqteup45jqtsgrpchnd3zlol0i36fxhk0p4d483igt727b52x93wnp08l7tpy3do3wtgn240mhy9x8c8dtm5cr0i2zpy31819ivcv8tb9wmelw0e24kqhodigj4n79m65cfbq8sztvd6pd4lbbx4nqm6x5ewysemdu7rp90znr9m2d40zinsxuimahggk32kw8l5qnydm8ypntfk33a9yzilz4oo6ffodroxso0kg73uumniaye1nisspbh2akoahzqse3d313e1ob3hcgu8s2k0r8pwpivuzr6lycmvfdqtwk6umukfzz3v1pn91pyk06kodavko5l5dm3faw190cht1ih69xbla04al6ounum0u91q2zdpu95ila6123d0wvmhexuw9y55kfge7fs2uuoxvl5iok0p3s1ygdyaymbpi9j8qrnnl4joa4eq5dq8s8680mozjrzb1m87c8toqaqts3rdfbre7fvft104v3j50lda3navcxg1oxmjrf7zx6er2h8by1hfktpyvuwb2fcw39j8sfz8v73ygmudf5a6j53jz9dhjo8snmrmv90pvb89vpwn8v59plz4kvt6dprjulh7orniipd24td4x7jyha8bbjceb
8k6rstt5f13ty8co037ica7n21lu629fi46mdxjchlxklji1ou665kut4ocmj8vz0am7n20x3v8vt60gkowzdwcbt4eu1owdm3jlgtpv8gkgu6qftilja3gsfxazg85bnit3r6m6ts6ujg4wa2dpfoa4qnwus2mqrzf8lnxqull8aw0lze2sdthsvlvo5vf7piisptlvcigy9w9wdkq2zpoo6tan0dgwfn3i8sokwfqa2ipyold1xtz81s6acwicszq6rxyd21jh7h2u5i506xb4557rs2e2lb2uv5z8vtdtjzedpukp7nrr3br5m1e30u8caiaalqcethailz3sizuhru51mkxz4yoqhvtw2burej38461bltr2j1vptc1r3xiz23pzm95jtaupo0p9ipwtainkb1dubjdkairpany6tbkk1643huzntntshiftgdfq00iw411haqer98i0n1u3wix2g3x0d82tf9s0nmpvo7xjul06mr8xw1zz9vj37ozuyj0t0kt3r5q80aqrkppl714r9pqpv37q5t6vmbzl7su5wwg7eccegpbwhlahvf1f8a9u0kledx7wmwzftdvp1cxdjdu7pqe6geeu7of1pds8wx47n4peb2ogpqlvh72owt7drzenujtclmiep0jbfafcu7yu4oemn2o0fndn161p9oj6r5ukhzmgejuez7mkhb0wq2wqgzrzlwozl3c2s6yykkybq0xnhxno5n8bfdc59i35nwibtc7igd3lkccmp7ftk3dqzttc7nolqyad0jkbq1vqy9p8q5axvespv9t437mip49b1t9gicz202yefewkjp3p95lbjhjr8a1oxgvf2k0wr5n32siue2cifiabkyrjfu6typm50w29idgpshdkyn9uums7vvhcl2zkxobnq1d6y7ntm32hadypjt3ejrngeevs2firzf8k8p5u21oe5roaa8c1ec75c5lgsvc6rz8915g5y5a2ep40q38ewfjnjhyn24vvfbhpv1i5rj5qjksgu3j8nkle970huwelp6u1t5a22q9qrgx8t2t7r9q4o0wo8urkav4xo0aznl2h3agyx2u449x7lr3zomhukqvq8klvgll1dmllrx2r9igum5tl8lan2dshu2guo454ud09s1vrgs7w8oj2mx0n1yj9olz4u6q8uaklhek4wly3bz99q0r5iq4x8lvf8scqxw0jqosv3zgx9072lcc1rcih964l4embtnuqdet2xtrryox7p6xrijn86ehul7nl8qczdj26yos8uh1z0w8tjrjnhcrb09cov2p3j8bz10ow0ebsq7hmlv1wxhkjzauu7yh1ot5o1obgacwdbc8i4jhpia8kiiqp566hmnrqekeu4150glfx1tapw7pzu3l9a2780jtpse812msq2xw6mtddpmad4s4d6aj6bshbcuwt6q4z5iio2fuvv10phc72d92vmktkkd56a8gjryo0nt2dqkoj2iy2jbzbt7l7s4d5d6hqop2q5vob8g46jhpm7bsx5up5573d4iuokbqf06wc7srqji7i4m5503fjejtp3451fit8i6kmd8xjqobnmrati9bdh2vkf4tkcrdbqa7lf8dvs1pk884lhs4j8mq6pyqp4120hvcvvmxgiuhg0cmg778q20dmhbwwb8klp8ccfgeaizkgo5hij7e4nfr68b1ujqk6840ws1cxk7ecm7b4q30p05wncr4t0lroejido2fm64978ubyegtwa5rjnhu5jqhdmaz5spmn8nc2ftlb0iouzm4bofdhsdvvxoemkpx15dtulrmb1j9va4smd30m200ae6axulwe1ujwlu07uo7zud7vkbnacz1h9i0bg305so9ziasmpyqntubovzo5y7gnorqvsu9b7awa429uxif0n45ymybuak5do2vzllivjx01mf1mnlu == 
\v\c\e\r\v\9\y\p\4\t\1\o\z\k\1\4\d\4\x\7\a\d\k\t\8\j\d\i\x\w\a\y\d\r\4\w\j\q\2\j\2\0\s\4\u\9\2\j\q\0\q\y\c\8\3\b\i\q\k\d\s\v\u\n\w\8\l\0\6\s\7\p\g\x\v\4\s\q\l\q\7\8\k\u\x\d\7\e\w\j\e\p\w\r\5\2\i\2\b\p\p\o\r\m\o\8\l\o\4\j\v\7\l\c\l\n\s\a\2\0\z\e\8\c\i\n\c\p\9\6\w\4\k\n\7\n\s\w\3\a\k\7\8\l\8\z\k\7\y\s\y\z\l\y\r\s\b\d\q\7\7\m\3\0\q\b\l\x\j\i\y\s\4\r\x\v\9\2\t\v\2\7\n\u\a\2\1\k\v\y\a\0\y\c\g\r\i\f\f\5\j\5\b\j\y\u\7\0\3\s\d\c\7\b\s\t\1\e\c\i\x\m\y\d\7\u\k\7\0\m\7\9\b\s\o\i\o\p\k\r\s\r\6\b\l\d\v\t\e\h\t\8\4\3\w\r\j\3\8\j\g\b\6\a\3\x\k\x\y\l\w\k\u\i\3\i\4\w\f\i\a\r\9\2\w\4\w\l\1\f\g\d\4\s\d\5\h\x\p\p\g\l\7\4\n\w\9\7\t\c\t\y\w\l\q\s\9\l\x\q\z\4\s\3\f\5\z\2\2\l\x\o\7\h\e\c\f\c\8\j\i\j\3\a\h\f\k\a\2\k\k\j\f\3\n\x\r\6\4\a\8\z\g\t\v\u\9\r\0\9\z\t\c\l\q\a\x\2\x\2\x\a\c\h\1\7\4\o\h\v\w\t\m\j\6\7\1\s\r\o\m\v\i\7\s\v\i\h\3\5\c\y\o\8\k\j\v\j\2\t\5\2\2\p\e\3\3\o\f\2\l\j\6\u\9\w\2\a\q\t\p\z\4\s\e\s\5\a\d\u\0\4\5\c\p\s\i\4\f\7\f\4\b\o\r\z\x\o\r\v\a\2\p\2\c\w\v\2\0\l\p\q\r\7\s\9\w\5\1\4\x\s\i\8\2\x\0\p\v\t\v\j\i\x\f\m\9\l\p\q\p\x\d\k\9\j\8\n\7\9\d\q\o\u\y\l\4\z\g\4\2\l\p\n\l\p\m\o\3\z\l\7\k\g\u\k\d\9\2\y\4\k\g\h\3\d\4\6\t\z\o\4\d\c\y\4\c\x\t\3\w\t\d\v\9\h\o\n\l\5\d\t\r\j\9\6\q\p\h\8\f\v\k\d\3\n\c\2\j\c\5\z\p\9\u\n\8\d\x\g\5\5\e\o\k\f\9\c\g\w\s\n\j\3\k\k\y\8\u\c\2\d\n\9\d\d\p\k\9\1\2\h\u\d\s\4\2\h\w\4\e\a\0\6\z\2\u\a\o\h\c\s\d\5\h\f\u\s\9\b\1\9\c\c\b\f\q\6\9\2\1\4\n\z\9\x\f\m\e\f\9\n\7\7\4\t\s\r\o\c\g\g\q\x\5\3\z\w\i\s\x\m\c\p\0\v\5\g\f\4\5\g\4\e\9\5\8\m\x\n\s\0\g\k\2\7\8\2\0\p\v\u\z\v\y\m\0\y\t\0\z\6\a\7\c\8\a\k\6\6\r\1\r\s\e\7\o\m\v\j\a\x\q\h\r\u\z\m\p\8\x\k\1\k\8\f\1\5\3\6\9\q\q\k\c\z\n\s\6\e\9\i\h\0\c\q\k\m\r\n\d\v\m\f\5\i\h\w\o\i\j\u\d\g\q\4\q\e\p\t\z\l\k\q\a\m\p\u\e\3\t\l\9\l\q\h\7\w\5\3\d\d\8\z\h\e\a\w\o\1\5\7\e\z\4\w\6\9\k\u\u\1\u\w\q\c\u\k\c\b\w\h\8\q\3\i\w\f\5\v\w\t\n\h\z\r\c\3\2\q\g\y\b\3\f\5\5\p\1\3\u\j\y\l\a\x\2\y\5\3\l\m\x\5\w\k\i\4\t\e\o\3\n\c\x\j\a\s\l\4\n\q\e\z\o\q\s\t\a\2\k\n\z\1\e\b\j\v\x\6\6\l\9\s\i\6\7\c\1\2\v\v\6\4\w\s\6\v\5\4\h\a\o\4\6\3\7\v\s\4\e\z\9\d\a\k\v\9\o\e\l\r\q\c\h\3\6\m\k\y\z\m\4\n\a\d\b\d\y\8\x\i\l\g\5\8\r\o\z\4\c\z\h\z\7\s\w\7\0\0\c\e\0\i\3\o\2\n\4\r\6\v\2\i\3\c\5\i\l\u\4\7\0\4\h\q\1\d\v\r\c\x\a\n\m\1\l\h\f\9\j\z\g\h\8\q\h\1\6\g\n\m\j\a\q\8\e\5\h\y\l\9\r\p\d\f\8\0\w\b\v\j\y\a\v\7\w\k\l\n\p\a\g\p\0\q\1\m\f\n\y\7\n\2\7\r\4\4\d\y\c\9\m\s\p\s\e\r\x\8\2\l\z\l\d\6\q\u\i\8\a\4\x\g\i\b\w\s\g\f\6\u\9\r\y\g\s\l\v\j\j\u\f\m\7\3\1\6\s\b\v\i\n\e\0\9\4\p\n\b\q\k\c\e\b\8\a\0\b\e\d\7\p\o\h\6\t\v\9\x\f\4\x\d\a\0\x\x\9\f\t\m\8\4\s\c\e\k\d\d\o\4\u\v\k\0\h\e\u\n\6\g\8\q\n\j\7\k\s\l\w\a\m\c\i\o\k\k\6\f\3\a\f\8\h\r\t\s\c\t\u\n\1\t\2\m\9\y\7\3\4\5\a\h\0\q\q\x\b\k\u\o\c\s\r\7\9\v\t\n\z\e\r\6\o\7\q\l\2\2\d\i\b\u\0\s\o\m\8\s\j\8\a\b\x\v\a\o\c\j\v\z\n\s\s\6\1\r\0\v\5\f\d\o\n\t\2\5\7\9\v\t\e\3\l\n\p\9\z\8\c\5\i\h\1\0\e\t\k\k\8\e\0\2\p\c\j\g\m\d\t\n\k\8\h\r\n\8\z\x\m\s\4\g\x\l\c\u\e\7\3\j\o\5\i\f\e\y\u\t\w\n\2\z\r\4\1\2\a\g\d\h\i\3\o\b\m\b\l\z\e\4\r\9\v\b\1\0\l\x\k\d\t\9\8\f\4\7\s\9\r\e\3\i\2\d\d\1\e\5\0\u\c\6\z\6\f\p\u\c\q\l\5\r\m\5\f\l\r\6\y\o\8\m\e\f\8\y\q\0\l\q\t\e\u\p\4\5\j\q\t\s\g\r\p\c\h\n\d\3\z\l\o\l\0\i\3\6\f\x\h\k\0\p\4\d\4\8\3\i\g\t\7\2\7\b\5\2\x\9\3\w\n\p\0\8\l\7\t\p\y\3\d\o\3\w\t\g\n\2\4\0\m\h\y\9\x\8\c\8\d\t\m\5\c\r\0\i\2\z\p\y\3\1\8\1\9\i\v\c\v\8\t\b\9\w\m\e\l\w\0\e\2\4\k\q\h\o\d\i\g\j\4\n\7\9\m\6\5\c\f\b\q\8\s\z\t\v\d\6\p\d\4\l\b\b\x\4\n\q\m\6\x\5\e\w\y\s\e\m\d\u\7\r\p\9\0\z\n\r\9\m\2\d\4\0\z\i\n\s\x\u\i\m\a\h\g\g\k\3\2\k\w\8\l\5\q\n\y\d\m\8\y\p\n\t\f\k\3\3\a\9\y\z\i\l\z\4\o\o\6\f\f\o\d\r\o\x\s\o\0\k\g\7\3\u\u\m\n\i\a\y\e\1\n\i\s\s\p\b\h\2\a\k\o\a\h\z\q\s\e\3\d\3\1\3\e\1\o\b\3\h\c\g\u\8\s\
2\k\0\r\8\p\w\p\i\v\u\z\r\6\l\y\c\m\v\f\d\q\t\w\k\6\u\m\u\k\f\z\z\3\v\1\p\n\9\1\p\y\k\0\6\k\o\d\a\v\k\o\5\l\5\d\m\3\f\a\w\1\9\0\c\h\t\1\i\h\6\9\x\b\l\a\0\4\a\l\6\o\u\n\u\m\0\u\9\1\q\2\z\d\p\u\9\5\i\l\a\6\1\2\3\d\0\w\v\m\h\e\x\u\w\9\y\5\5\k\f\g\e\7\f\s\2\u\u\o\x\v\l\5\i\o\k\0\p\3\s\1\y\g\d\y\a\y\m\b\p\i\9\j\8\q\r\n\n\l\4\j\o\a\4\e\q\5\d\q\8\s\8\6\8\0\m\o\z\j\r\z\b\1\m\8\7\c\8\t\o\q\a\q\t\s\3\r\d\f\b\r\e\7\f\v\f\t\1\0\4\v\3\j\5\0\l\d\a\3\n\a\v\c\x\g\1\o\x\m\j\r\f\7\z\x\6\e\r\2\h\8\b\y\1\h\f\k\t\p\y\v\u\w\b\2\f\c\w\3\9\j\8\s\f\z\8\v\7\3\y\g\m\u\d\f\5\a\6\j\5\3\j\z\9\d\h\j\o\8\s\n\m\r\m\v\9\0\p\v\b\8\9\v\p\w\n\8\v\5\9\p\l\z\4\k\v\t\6\d\p\r\j\u\l\h\7\o\r\n\i\i\p\d\2\4\t\d\4\x\7\j\y\h\a\8\b\b\j\c\e\b\8\k\6\r\s\t\t\5\f\1\3\t\y\8\c\o\0\3\7\i\c\a\7\n\2\1\l\u\6\2\9\f\i\4\6\m\d\x\j\c\h\l\x\k\l\j\i\1\o\u\6\6\5\k\u\t\4\o\c\m\j\8\v\z\0\a\m\7\n\2\0\x\3\v\8\v\t\6\0\g\k\o\w\z\d\w\c\b\t\4\e\u\1\o\w\d\m\3\j\l\g\t\p\v\8\g\k\g\u\6\q\f\t\i\l\j\a\3\g\s\f\x\a\z\g\8\5\b\n\i\t\3\r\6\m\6\t\s\6\u\j\g\4\w\a\2\d\p\f\o\a\4\q\n\w\u\s\2\m\q\r\z\f\8\l\n\x\q\u\l\l\8\a\w\0\l\z\e\2\s\d\t\h\s\v\l\v\o\5\v\f\7\p\i\i\s\p\t\l\v\c\i\g\y\9\w\9\w\d\k\q\2\z\p\o\o\6\t\a\n\0\d\g\w\f\n\3\i\8\s\o\k\w\f\q\a\2\i\p\y\o\l\d\1\x\t\z\8\1\s\6\a\c\w\i\c\s\z\q\6\r\x\y\d\2\1\j\h\7\h\2\u\5\i\5\0\6\x\b\4\5\5\7\r\s\2\e\2\l\b\2\u\v\5\z\8\v\t\d\t\j\z\e\d\p\u\k\p\7\n\r\r\3\b\r\5\m\1\e\3\0\u\8\c\a\i\a\a\l\q\c\e\t\h\a\i\l\z\3\s\i\z\u\h\r\u\5\1\m\k\x\z\4\y\o\q\h\v\t\w\2\b\u\r\e\j\3\8\4\6\1\b\l\t\r\2\j\1\v\p\t\c\1\r\3\x\i\z\2\3\p\z\m\9\5\j\t\a\u\p\o\0\p\9\i\p\w\t\a\i\n\k\b\1\d\u\b\j\d\k\a\i\r\p\a\n\y\6\t\b\k\k\1\6\4\3\h\u\z\n\t\n\t\s\h\i\f\t\g\d\f\q\0\0\i\w\4\1\1\h\a\q\e\r\9\8\i\0\n\1\u\3\w\i\x\2\g\3\x\0\d\8\2\t\f\9\s\0\n\m\p\v\o\7\x\j\u\l\0\6\m\r\8\x\w\1\z\z\9\v\j\3\7\o\z\u\y\j\0\t\0\k\t\3\r\5\q\8\0\a\q\r\k\p\p\l\7\1\4\r\9\p\q\p\v\3\7\q\5\t\6\v\m\b\z\l\7\s\u\5\w\w\g\7\e\c\c\e\g\p\b\w\h\l\a\h\v\f\1\f\8\a\9\u\0\k\l\e\d\x\7\w\m\w\z\f\t\d\v\p\1\c\x\d\j\d\u\7\p\q\e\6\g\e\e\u\7\o\f\1\p\d\s\8\w\x\4\7\n\4\p\e\b\2\o\g\p\q\l\v\h\7\2\o\w\t\7\d\r\z\e\n\u\j\t\c\l\m\i\e\p\0\j\b\f\a\f\c\u\7\y\u\4\o\e\m\n\2\o\0\f\n\d\n\1\6\1\p\9\o\j\6\r\5\u\k\h\z\m\g\e\j\u\e\z\7\m\k\h\b\0\w\q\2\w\q\g\z\r\z\l\w\o\z\l\3\c\2\s\6\y\y\k\k\y\b\q\0\x\n\h\x\n\o\5\n\8\b\f\d\c\5\9\i\3\5\n\w\i\b\t\c\7\i\g\d\3\l\k\c\c\m\p\7\f\t\k\3\d\q\z\t\t\c\7\n\o\l\q\y\a\d\0\j\k\b\q\1\v\q\y\9\p\8\q\5\a\x\v\e\s\p\v\9\t\4\3\7\m\i\p\4\9\b\1\t\9\g\i\c\z\2\0\2\y\e\f\e\w\k\j\p\3\p\9\5\l\b\j\h\j\r\8\a\1\o\x\g\v\f\2\k\0\w\r\5\n\3\2\s\i\u\e\2\c\i\f\i\a\b\k\y\r\j\f\u\6\t\y\p\m\5\0\w\2\9\i\d\g\p\s\h\d\k\y\n\9\u\u\m\s\7\v\v\h\c\l\2\z\k\x\o\b\n\q\1\d\6\y\7\n\t\m\3\2\h\a\d\y\p\j\t\3\e\j\r\n\g\e\e\v\s\2\f\i\r\z\f\8\k\8\p\5\u\2\1\o\e\5\r\o\a\a\8\c\1\e\c\7\5\c\5\l\g\s\v\c\6\r\z\8\9\1\5\g\5\y\5\a\2\e\p\4\0\q\3\8\e\w\f\j\n\j\h\y\n\2\4\v\v\f\b\h\p\v\1\i\5\r\j\5\q\j\k\s\g\u\3\j\8\n\k\l\e\9\7\0\h\u\w\e\l\p\6\u\1\t\5\a\2\2\q\9\q\r\g\x\8\t\2\t\7\r\9\q\4\o\0\w\o\8\u\r\k\a\v\4\x\o\0\a\z\n\l\2\h\3\a\g\y\x\2\u\4\4\9\x\7\l\r\3\z\o\m\h\u\k\q\v\q\8\k\l\v\g\l\l\1\d\m\l\l\r\x\2\r\9\i\g\u\m\5\t\l\8\l\a\n\2\d\s\h\u\2\g\u\o\4\5\4\u\d\0\9\s\1\v\r\g\s\7\w\8\o\j\2\m\x\0\n\1\y\j\9\o\l\z\4\u\6\q\8\u\a\k\l\h\e\k\4\w\l\y\3\b\z\9\9\q\0\r\5\i\q\4\x\8\l\v\f\8\s\c\q\x\w\0\j\q\o\s\v\3\z\g\x\9\0\7\2\l\c\c\1\r\c\i\h\9\6\4\l\4\e\m\b\t\n\u\q\d\e\t\2\x\t\r\r\y\o\x\7\p\6\x\r\i\j\n\8\6\e\h\u\l\7\n\l\8\q\c\z\d\j\2\6\y\o\s\8\u\h\1\z\0\w\8\t\j\r\j\n\h\c\r\b\0\9\c\o\v\2\p\3\j\8\b\z\1\0\o\w\0\e\b\s\q\7\h\m\l\v\1\w\x\h\k\j\z\a\u\u\7\y\h\1\o\t\5\o\1\o\b\g\a\c\w\d\b\c\8\i\4\j\h\p\i\a\8\k\i\i\q\p\5\6\6\h\m\n\r\q\e\k\e\u\4\1\5\0\g\l\f\x\1\t\a\p\w\7\p\z\u\3\l\9\a\2\7\8\0\j\t\p\s\e
\8\1\2\m\s\q\2\x\w\6\m\t\d\d\p\m\a\d\4\s\4\d\6\a\j\6\b\s\h\b\c\u\w\t\6\q\4\z\5\i\i\o\2\f\u\v\v\1\0\p\h\c\7\2\d\9\2\v\m\k\t\k\k\d\5\6\a\8\g\j\r\y\o\0\n\t\2\d\q\k\o\j\2\i\y\2\j\b\z\b\t\7\l\7\s\4\d\5\d\6\h\q\o\p\2\q\5\v\o\b\8\g\4\6\j\h\p\m\7\b\s\x\5\u\p\5\5\7\3\d\4\i\u\o\k\b\q\f\0\6\w\c\7\s\r\q\j\i\7\i\4\m\5\5\0\3\f\j\e\j\t\p\3\4\5\1\f\i\t\8\i\6\k\m\d\8\x\j\q\o\b\n\m\r\a\t\i\9\b\d\h\2\v\k\f\4\t\k\c\r\d\b\q\a\7\l\f\8\d\v\s\1\p\k\8\8\4\l\h\s\4\j\8\m\q\6\p\y\q\p\4\1\2\0\h\v\c\v\v\m\x\g\i\u\h\g\0\c\m\g\7\7\8\q\2\0\d\m\h\b\w\w\b\8\k\l\p\8\c\c\f\g\e\a\i\z\k\g\o\5\h\i\j\7\e\4\n\f\r\6\8\b\1\u\j\q\k\6\8\4\0\w\s\1\c\x\k\7\e\c\m\7\b\4\q\3\0\p\0\5\w\n\c\r\4\t\0\l\r\o\e\j\i\d\o\2\f\m\6\4\9\7\8\u\b\y\e\g\t\w\a\5\r\j\n\h\u\5\j\q\h\d\m\a\z\5\s\p\m\n\8\n\c\2\f\t\l\b\0\i\o\u\z\m\4\b\o\f\d\h\s\d\v\v\x\o\e\m\k\p\x\1\5\d\t\u\l\r\m\b\1\j\9\v\a\4\s\m\d\3\0\m\2\0\0\a\e\6\a\x\u\l\w\e\1\u\j\w\l\u\0\7\u\o\7\z\u\d\7\v\k\b\n\a\c\z\1\h\9\i\0\b\g\3\0\5\s\o\9\z\i\a\s\m\p\y\q\n\t\u\b\o\v\z\o\5\y\7\g\n\o\r\q\v\s\u\9\b\7\a\w\a\4\2\9\u\x\i\f\0\n\4\5\y\m\y\b\u\a\k\5\d\o\2\v\z\l\l\i\v\j\x\0\1\m\f\1\m\n\l\u ]] 00:07:49.503 00:07:49.503 real 0m0.941s 00:07:49.503 user 0m0.624s 00:07:49.503 sys 0m0.383s 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 ************************************ 00:07:49.503 END TEST dd_rw_offset 00:07:49.503 ************************************ 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.503 16:13:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 [2024-11-26 16:13:14.982431] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
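The dd_rw_offset exchange above reduces to a seek/skip round trip: one 4096-byte block is written from dd.dump0 into the Nvme0n1 bdev at block offset 1 (--ob/--seek), read back into dd.dump1 (--ib/--skip/--count), and the two buffers are compared. A minimal sketch of that sequence, assuming the build path and the PCIe controller at 0000:00:10.0 from the JSON config logged above; the cmp at the end stands in for the 4096-character bash comparison in the log:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
CONF=$(mktemp)   # bdev config mirroring the one in the log (paths/addresses are assumptions)
cat > "$CONF" <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json "$CONF"            # write one block at offset 1
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json "$CONF"  # read the same block back
cmp -s -n 4096 "$DUMP0" "$DUMP1" && echo "offset round trip OK"          # compare the first 4096 bytes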
00:07:49.503 [2024-11-26 16:13:14.982540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72122 ] 00:07:49.503 { 00:07:49.503 "subsystems": [ 00:07:49.503 { 00:07:49.503 "subsystem": "bdev", 00:07:49.503 "config": [ 00:07:49.503 { 00:07:49.503 "params": { 00:07:49.503 "trtype": "pcie", 00:07:49.503 "traddr": "0000:00:10.0", 00:07:49.503 "name": "Nvme0" 00:07:49.503 }, 00:07:49.503 "method": "bdev_nvme_attach_controller" 00:07:49.503 }, 00:07:49.503 { 00:07:49.503 "method": "bdev_wait_for_examine" 00:07:49.503 } 00:07:49.503 ] 00:07:49.503 } 00:07:49.503 ] 00:07:49.503 } 00:07:49.503 [2024-11-26 16:13:15.128930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.761 [2024-11-26 16:13:15.151996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.761 [2024-11-26 16:13:15.180984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.761  [2024-11-26T16:13:15.414Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:49.761 00:07:49.761 16:13:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.761 ************************************ 00:07:49.761 END TEST spdk_dd_basic_rw 00:07:49.761 ************************************ 00:07:49.761 00:07:49.761 real 0m13.521s 00:07:49.761 user 0m9.693s 00:07:49.761 sys 0m4.307s 00:07:49.761 16:13:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.761 16:13:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.021 16:13:15 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:50.021 16:13:15 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.021 16:13:15 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.021 16:13:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:50.021 ************************************ 00:07:50.021 START TEST spdk_dd_posix 00:07:50.021 ************************************ 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:50.022 * Looking for test storage... 
00:07:50.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.022 --rc genhtml_branch_coverage=1 00:07:50.022 --rc genhtml_function_coverage=1 00:07:50.022 --rc genhtml_legend=1 00:07:50.022 --rc geninfo_all_blocks=1 00:07:50.022 --rc geninfo_unexecuted_blocks=1 00:07:50.022 00:07:50.022 ' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.022 --rc genhtml_branch_coverage=1 00:07:50.022 --rc genhtml_function_coverage=1 00:07:50.022 --rc genhtml_legend=1 00:07:50.022 --rc geninfo_all_blocks=1 00:07:50.022 --rc geninfo_unexecuted_blocks=1 00:07:50.022 00:07:50.022 ' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.022 --rc genhtml_branch_coverage=1 00:07:50.022 --rc genhtml_function_coverage=1 00:07:50.022 --rc genhtml_legend=1 00:07:50.022 --rc geninfo_all_blocks=1 00:07:50.022 --rc geninfo_unexecuted_blocks=1 00:07:50.022 00:07:50.022 ' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.022 --rc genhtml_branch_coverage=1 00:07:50.022 --rc genhtml_function_coverage=1 00:07:50.022 --rc genhtml_legend=1 00:07:50.022 --rc geninfo_all_blocks=1 00:07:50.022 --rc geninfo_unexecuted_blocks=1 00:07:50.022 00:07:50.022 ' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:50.022 * First test run, liburing in use 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:50.022 16:13:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.282 ************************************ 00:07:50.282 START TEST dd_flag_append 00:07:50.282 ************************************ 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=udiuaeua7tkxsz9cnmyku8qorb8xat0r 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=8u86q69fjjexv70ba1om0mdai5845xr9 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s udiuaeua7tkxsz9cnmyku8qorb8xat0r 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 8u86q69fjjexv70ba1om0mdai5845xr9 00:07:50.282 16:13:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:50.282 [2024-11-26 16:13:15.739752] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:50.282 [2024-11-26 16:13:15.740182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72189 ] 00:07:50.282 [2024-11-26 16:13:15.886968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.282 [2024-11-26 16:13:15.909441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.542 [2024-11-26 16:13:15.939381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.542  [2024-11-26T16:13:16.195Z] Copying: 32/32 [B] (average 31 kBps) 00:07:50.542 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 8u86q69fjjexv70ba1om0mdai5845xr9udiuaeua7tkxsz9cnmyku8qorb8xat0r == \8\u\8\6\q\6\9\f\j\j\e\x\v\7\0\b\a\1\o\m\0\m\d\a\i\5\8\4\5\x\r\9\u\d\i\u\a\e\u\a\7\t\k\x\s\z\9\c\n\m\y\k\u\8\q\o\r\b\8\x\a\t\0\r ]] 00:07:50.542 00:07:50.542 real 0m0.404s 00:07:50.542 user 0m0.203s 00:07:50.542 sys 0m0.173s 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.542 ************************************ 00:07:50.542 END TEST dd_flag_append 00:07:50.542 ************************************ 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.542 ************************************ 00:07:50.542 START TEST dd_flag_directory 00:07:50.542 ************************************ 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.542 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.543 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
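The dd_flag_append case that finished above boils down to: seed dd.dump1 with one 32-byte string, put a second 32-byte string in dd.dump0, copy dump0 onto dump1 with --oflag=append, and check that dump1 now holds the concatenation of the two. A rough equivalent with throwaway temp files standing in for the dd.dump files (the literal strings are placeholders, not the random bytes from the log):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=$(mktemp); dst=$(mktemp)
printf %s 'SECONDHALF' > "$src"   # plays the role of dd.dump0
printf %s 'FIRSTHALF'  > "$dst"   # plays the role of dd.dump1
"$SPDK_DD" --if="$src" --of="$dst" --oflag=append
[[ $(cat "$dst") == 'FIRSTHALFSECONDHALF' ]] && echo 'append OK'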
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.543 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.543 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.543 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.803 [2024-11-26 16:13:16.195932] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:50.803 [2024-11-26 16:13:16.196057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72217 ] 00:07:50.803 [2024-11-26 16:13:16.342875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.803 [2024-11-26 16:13:16.363034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.803 [2024-11-26 16:13:16.391233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.803 [2024-11-26 16:13:16.406478] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:50.803 [2024-11-26 16:13:16.406528] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:50.803 [2024-11-26 16:13:16.406560] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.062 [2024-11-26 16:13:16.465247] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.062 16:13:16 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.062 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.062 [2024-11-26 16:13:16.564860] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:51.063 [2024-11-26 16:13:16.565122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72227 ] 00:07:51.063 [2024-11-26 16:13:16.703928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.322 [2024-11-26 16:13:16.723766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.322 [2024-11-26 16:13:16.752392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.322 [2024-11-26 16:13:16.766758] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.322 [2024-11-26 16:13:16.767070] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.322 [2024-11-26 16:13:16.767110] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.322 [2024-11-26 16:13:16.822478] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.322 00:07:51.322 real 0m0.744s 00:07:51.322 user 0m0.342s 00:07:51.322 sys 0m0.190s 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.322 ************************************ 00:07:51.322 END TEST dd_flag_directory 00:07:51.322 ************************************ 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:51.322 16:13:16 
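dd_flag_directory, which just completed, is a negative test: passing --iflag=directory or --oflag=directory for a path that is a regular file must make spdk_dd fail with "Not a directory" (the NOT wrapper in the log inverts the exit status). A compact sketch of the same expectation:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
f=$(mktemp)   # a regular file, not a directory
if ! "$SPDK_DD" --if="$f" --iflag=directory --of="$f"; then
    echo 'directory iflag rejected for a regular file, as expected'
fi
if ! "$SPDK_DD" --if="$f" --of="$f" --oflag=directory; then
    echo 'directory oflag rejected for a regular file, as expected'
fi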
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:51.322 ************************************ 00:07:51.322 START TEST dd_flag_nofollow 00:07:51.322 ************************************ 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.322 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.323 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.323 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.323 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.323 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.323 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.323 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.323 16:13:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.582 [2024-11-26 16:13:16.991938] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:51.582 [2024-11-26 16:13:16.992315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72254 ] 00:07:51.582 [2024-11-26 16:13:17.135672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.582 [2024-11-26 16:13:17.155897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.582 [2024-11-26 16:13:17.185695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.582 [2024-11-26 16:13:17.202148] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:51.582 [2024-11-26 16:13:17.202194] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:51.582 [2024-11-26 16:13:17.202227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.844 [2024-11-26 16:13:17.263791] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.844 16:13:17 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.844 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:51.844 [2024-11-26 16:13:17.360273] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:51.844 [2024-11-26 16:13:17.360673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72265 ] 00:07:52.104 [2024-11-26 16:13:17.501933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.104 [2024-11-26 16:13:17.522503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.104 [2024-11-26 16:13:17.550508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.104 [2024-11-26 16:13:17.565747] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:52.104 [2024-11-26 16:13:17.566043] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:52.104 [2024-11-26 16:13:17.566085] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.104 [2024-11-26 16:13:17.629574] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:52.104 16:13:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.364 [2024-11-26 16:13:17.761781] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:52.364 [2024-11-26 16:13:17.761895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72267 ] 00:07:52.364 [2024-11-26 16:13:17.903863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.364 [2024-11-26 16:13:17.923755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.364 [2024-11-26 16:13:17.952627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.364  [2024-11-26T16:13:18.276Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.623 00:07:52.623 ************************************ 00:07:52.623 END TEST dd_flag_nofollow 00:07:52.623 ************************************ 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 1j7ifmi71z957nitldg0yq57dr0rpjq26ynt2pm5l3kkko85xpufwcb3jta7fmrarzdw5pvjx34znwzyiy30r5jbvego1e5tlfod2tvk0fj1f7f30ayn6r26of9qfnljohs5ygh1k4q3gfqlu9bi3iqc01gp5fcd1z8wu056158bkag2sqmynxx8vvbsnxf9ds2mwz62jlivk0bh3no05elyx3oux3m3oj44odpoerscuir5et6jkxl9ie27u23vl6vd7ppyx8ks2r8oilstd2nin5dd26fnsc4a8x7da4ooz4022o9kjzyiwxy4y5uajsm5b65vpgz6eejw9twk8zmzqlurd7v9y5ne0ih5uflycbcd8s66t3px03nkwzahpobchkrsj40zqglqerqp3cvt2vlt02dilaey3owghaovxn8bw407wsjork9x2dxist4v1wh1tn88xxbue0tlu5qeiqxlpymjf0pp4bpcwzjh41m380q7pgibhapo269c == \1\j\7\i\f\m\i\7\1\z\9\5\7\n\i\t\l\d\g\0\y\q\5\7\d\r\0\r\p\j\q\2\6\y\n\t\2\p\m\5\l\3\k\k\k\o\8\5\x\p\u\f\w\c\b\3\j\t\a\7\f\m\r\a\r\z\d\w\5\p\v\j\x\3\4\z\n\w\z\y\i\y\3\0\r\5\j\b\v\e\g\o\1\e\5\t\l\f\o\d\2\t\v\k\0\f\j\1\f\7\f\3\0\a\y\n\6\r\2\6\o\f\9\q\f\n\l\j\o\h\s\5\y\g\h\1\k\4\q\3\g\f\q\l\u\9\b\i\3\i\q\c\0\1\g\p\5\f\c\d\1\z\8\w\u\0\5\6\1\5\8\b\k\a\g\2\s\q\m\y\n\x\x\8\v\v\b\s\n\x\f\9\d\s\2\m\w\z\6\2\j\l\i\v\k\0\b\h\3\n\o\0\5\e\l\y\x\3\o\u\x\3\m\3\o\j\4\4\o\d\p\o\e\r\s\c\u\i\r\5\e\t\6\j\k\x\l\9\i\e\2\7\u\2\3\v\l\6\v\d\7\p\p\y\x\8\k\s\2\r\8\o\i\l\s\t\d\2\n\i\n\5\d\d\2\6\f\n\s\c\4\a\8\x\7\d\a\4\o\o\z\4\0\2\2\o\9\k\j\z\y\i\w\x\y\4\y\5\u\a\j\s\m\5\b\6\5\v\p\g\z\6\e\e\j\w\9\t\w\k\8\z\m\z\q\l\u\r\d\7\v\9\y\5\n\e\0\i\h\5\u\f\l\y\c\b\c\d\8\s\6\6\t\3\p\x\0\3\n\k\w\z\a\h\p\o\b\c\h\k\r\s\j\4\0\z\q\g\l\q\e\r\q\p\3\c\v\t\2\v\l\t\0\2\d\i\l\a\e\y\3\o\w\g\h\a\o\v\x\n\8\b\w\4\0\7\w\s\j\o\r\k\9\x\2\d\x\i\s\t\4\v\1\w\h\1\t\n\8\8\x\x\b\u\e\0\t\l\u\5\q\e\i\q\x\l\p\y\m\j\f\0\p\p\4\b\p\c\w\z\j\h\4\1\m\3\8\0\q\7\p\g\i\b\h\a\p\o\2\6\9\c ]] 00:07:52.623 00:07:52.623 real 0m1.164s 00:07:52.623 user 0m0.573s 00:07:52.623 sys 0m0.349s 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:52.623 ************************************ 00:07:52.623 START TEST dd_flag_noatime 00:07:52.623 ************************************ 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
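The dd_flag_nofollow run above creates symlinks dd.dump0.link and dd.dump1.link, expects spdk_dd to refuse them when --iflag=nofollow or --oflag=nofollow is set ("Too many levels of symbolic links"), and finally copies through the input link without the flag, which succeeds. A small sketch of those checks with temporary files:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
real_in=$(mktemp); real_out=$(mktemp)
ln -fs "$real_in"  "$real_in.link"
ln -fs "$real_out" "$real_out.link"
if ! "$SPDK_DD" --if="$real_in.link" --iflag=nofollow --of="$real_out"; then
    echo 'nofollow refused the input symlink, as expected'
fi
if ! "$SPDK_DD" --if="$real_in" --of="$real_out.link" --oflag=nofollow; then
    echo 'nofollow refused the output symlink, as expected'
fi
"$SPDK_DD" --if="$real_in.link" --of="$real_out"   # without the flag the link is followed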
atime_if 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732637597 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732637598 00:07:52.623 16:13:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:53.576 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.863 [2024-11-26 16:13:19.231625] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:53.863 [2024-11-26 16:13:19.231733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72309 ] 00:07:53.863 [2024-11-26 16:13:19.384431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.863 [2024-11-26 16:13:19.409294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.863 [2024-11-26 16:13:19.442595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.863  [2024-11-26T16:13:19.775Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.122 00:07:54.122 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.122 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732637597 )) 00:07:54.122 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.122 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732637598 )) 00:07:54.122 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.122 [2024-11-26 16:13:19.642930] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:54.122 [2024-11-26 16:13:19.643040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72323 ] 00:07:54.382 [2024-11-26 16:13:19.789395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.382 [2024-11-26 16:13:19.808945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.382 [2024-11-26 16:13:19.836678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.382  [2024-11-26T16:13:20.035Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.382 00:07:54.382 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.382 ************************************ 00:07:54.382 END TEST dd_flag_noatime 00:07:54.382 ************************************ 00:07:54.382 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732637599 )) 00:07:54.382 00:07:54.382 real 0m1.832s 00:07:54.382 user 0m0.407s 00:07:54.382 sys 0m0.372s 00:07:54.382 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.382 16:13:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:54.382 16:13:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:54.382 16:13:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.382 16:13:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.382 16:13:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:54.642 ************************************ 00:07:54.642 START TEST dd_flags_misc 00:07:54.642 ************************************ 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.642 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:54.642 [2024-11-26 16:13:20.093042] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
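dd_flag_noatime, just finished, records the access time of dd.dump0 with stat --printf=%X, copies it with --iflag=noatime, and checks the atime did not move; a later copy without the flag is allowed to advance it. A minimal sketch of that check (whether the plain copy actually bumps the atime also depends on the filesystem's relatime/noatime mount options, which the sketch does not control):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
src=$(mktemp); dst=$(mktemp)
printf %s 'payload' > "$src"
before=$(stat --printf=%X "$src")
sleep 1
"$SPDK_DD" --if="$src" --of="$dst" --iflag=noatime
[[ $(stat --printf=%X "$src") == "$before" ]] && echo 'atime preserved by noatime'
sleep 1
"$SPDK_DD" --if="$src" --of="$dst"   # plain copy; the source atime may now advance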
00:07:54.642 [2024-11-26 16:13:20.093288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72346 ] 00:07:54.642 [2024-11-26 16:13:20.231768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.642 [2024-11-26 16:13:20.251901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.642 [2024-11-26 16:13:20.285839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.901  [2024-11-26T16:13:20.554Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.901 00:07:54.902 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7uxkyj2n1dhjcxr4lt3ookc49bcqj61za1nlqi6xe91fmguag0h7uau5xt7g5mua1qu4umqn480gcf5evaeeopmvikss4an4b83n3cehou0lu3kb0q7tcyls5xeuvg8eyyycdc8x3c45f0wgssbejgms97m2oxlzzlynw6hi356ro7zgugy5xcb2vkynw2mp3sf0rv7qc9za9rrgsn1q6zem0qjlylc2n4gqwd7qpv3rblj7fxf0eq3jcibby4q3fb5067mm0olvd69xzrxwqayaimyzehoo3ae5j8wb3bvucchlmp2fq25kudqywft5xt796calhgmw23lsmrl86hxbr66wzlhss97geh3i9276cjjasx28qoy2dc49pqgio88eyns8fgvyberxl2lznom2ox13pou909qf3dzokru993x2j9tzfuqxulvknikpbyglyov710i8tv9o7bd5m94v40x92ivs0hohzsn8k3h8sa67xywhjfqjtf1ys6st == \7\u\x\k\y\j\2\n\1\d\h\j\c\x\r\4\l\t\3\o\o\k\c\4\9\b\c\q\j\6\1\z\a\1\n\l\q\i\6\x\e\9\1\f\m\g\u\a\g\0\h\7\u\a\u\5\x\t\7\g\5\m\u\a\1\q\u\4\u\m\q\n\4\8\0\g\c\f\5\e\v\a\e\e\o\p\m\v\i\k\s\s\4\a\n\4\b\8\3\n\3\c\e\h\o\u\0\l\u\3\k\b\0\q\7\t\c\y\l\s\5\x\e\u\v\g\8\e\y\y\y\c\d\c\8\x\3\c\4\5\f\0\w\g\s\s\b\e\j\g\m\s\9\7\m\2\o\x\l\z\z\l\y\n\w\6\h\i\3\5\6\r\o\7\z\g\u\g\y\5\x\c\b\2\v\k\y\n\w\2\m\p\3\s\f\0\r\v\7\q\c\9\z\a\9\r\r\g\s\n\1\q\6\z\e\m\0\q\j\l\y\l\c\2\n\4\g\q\w\d\7\q\p\v\3\r\b\l\j\7\f\x\f\0\e\q\3\j\c\i\b\b\y\4\q\3\f\b\5\0\6\7\m\m\0\o\l\v\d\6\9\x\z\r\x\w\q\a\y\a\i\m\y\z\e\h\o\o\3\a\e\5\j\8\w\b\3\b\v\u\c\c\h\l\m\p\2\f\q\2\5\k\u\d\q\y\w\f\t\5\x\t\7\9\6\c\a\l\h\g\m\w\2\3\l\s\m\r\l\8\6\h\x\b\r\6\6\w\z\l\h\s\s\9\7\g\e\h\3\i\9\2\7\6\c\j\j\a\s\x\2\8\q\o\y\2\d\c\4\9\p\q\g\i\o\8\8\e\y\n\s\8\f\g\v\y\b\e\r\x\l\2\l\z\n\o\m\2\o\x\1\3\p\o\u\9\0\9\q\f\3\d\z\o\k\r\u\9\9\3\x\2\j\9\t\z\f\u\q\x\u\l\v\k\n\i\k\p\b\y\g\l\y\o\v\7\1\0\i\8\t\v\9\o\7\b\d\5\m\9\4\v\4\0\x\9\2\i\v\s\0\h\o\h\z\s\n\8\k\3\h\8\s\a\6\7\x\y\w\h\j\f\q\j\t\f\1\y\s\6\s\t ]] 00:07:54.902 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.902 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:54.902 [2024-11-26 16:13:20.456772] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:54.902 [2024-11-26 16:13:20.457159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72361 ] 00:07:55.161 [2024-11-26 16:13:20.594699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.161 [2024-11-26 16:13:20.613373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.161 [2024-11-26 16:13:20.640068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.161  [2024-11-26T16:13:20.814Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.161 00:07:55.161 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7uxkyj2n1dhjcxr4lt3ookc49bcqj61za1nlqi6xe91fmguag0h7uau5xt7g5mua1qu4umqn480gcf5evaeeopmvikss4an4b83n3cehou0lu3kb0q7tcyls5xeuvg8eyyycdc8x3c45f0wgssbejgms97m2oxlzzlynw6hi356ro7zgugy5xcb2vkynw2mp3sf0rv7qc9za9rrgsn1q6zem0qjlylc2n4gqwd7qpv3rblj7fxf0eq3jcibby4q3fb5067mm0olvd69xzrxwqayaimyzehoo3ae5j8wb3bvucchlmp2fq25kudqywft5xt796calhgmw23lsmrl86hxbr66wzlhss97geh3i9276cjjasx28qoy2dc49pqgio88eyns8fgvyberxl2lznom2ox13pou909qf3dzokru993x2j9tzfuqxulvknikpbyglyov710i8tv9o7bd5m94v40x92ivs0hohzsn8k3h8sa67xywhjfqjtf1ys6st == \7\u\x\k\y\j\2\n\1\d\h\j\c\x\r\4\l\t\3\o\o\k\c\4\9\b\c\q\j\6\1\z\a\1\n\l\q\i\6\x\e\9\1\f\m\g\u\a\g\0\h\7\u\a\u\5\x\t\7\g\5\m\u\a\1\q\u\4\u\m\q\n\4\8\0\g\c\f\5\e\v\a\e\e\o\p\m\v\i\k\s\s\4\a\n\4\b\8\3\n\3\c\e\h\o\u\0\l\u\3\k\b\0\q\7\t\c\y\l\s\5\x\e\u\v\g\8\e\y\y\y\c\d\c\8\x\3\c\4\5\f\0\w\g\s\s\b\e\j\g\m\s\9\7\m\2\o\x\l\z\z\l\y\n\w\6\h\i\3\5\6\r\o\7\z\g\u\g\y\5\x\c\b\2\v\k\y\n\w\2\m\p\3\s\f\0\r\v\7\q\c\9\z\a\9\r\r\g\s\n\1\q\6\z\e\m\0\q\j\l\y\l\c\2\n\4\g\q\w\d\7\q\p\v\3\r\b\l\j\7\f\x\f\0\e\q\3\j\c\i\b\b\y\4\q\3\f\b\5\0\6\7\m\m\0\o\l\v\d\6\9\x\z\r\x\w\q\a\y\a\i\m\y\z\e\h\o\o\3\a\e\5\j\8\w\b\3\b\v\u\c\c\h\l\m\p\2\f\q\2\5\k\u\d\q\y\w\f\t\5\x\t\7\9\6\c\a\l\h\g\m\w\2\3\l\s\m\r\l\8\6\h\x\b\r\6\6\w\z\l\h\s\s\9\7\g\e\h\3\i\9\2\7\6\c\j\j\a\s\x\2\8\q\o\y\2\d\c\4\9\p\q\g\i\o\8\8\e\y\n\s\8\f\g\v\y\b\e\r\x\l\2\l\z\n\o\m\2\o\x\1\3\p\o\u\9\0\9\q\f\3\d\z\o\k\r\u\9\9\3\x\2\j\9\t\z\f\u\q\x\u\l\v\k\n\i\k\p\b\y\g\l\y\o\v\7\1\0\i\8\t\v\9\o\7\b\d\5\m\9\4\v\4\0\x\9\2\i\v\s\0\h\o\h\z\s\n\8\k\3\h\8\s\a\6\7\x\y\w\h\j\f\q\j\t\f\1\y\s\6\s\t ]] 00:07:55.161 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.161 16:13:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:55.161 [2024-11-26 16:13:20.800273] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:55.161 [2024-11-26 16:13:20.800376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72365 ] 00:07:55.421 [2024-11-26 16:13:20.937099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.421 [2024-11-26 16:13:20.956716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.421 [2024-11-26 16:13:20.984972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.421  [2024-11-26T16:13:21.334Z] Copying: 512/512 [B] (average 100 kBps) 00:07:55.681 00:07:55.681 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7uxkyj2n1dhjcxr4lt3ookc49bcqj61za1nlqi6xe91fmguag0h7uau5xt7g5mua1qu4umqn480gcf5evaeeopmvikss4an4b83n3cehou0lu3kb0q7tcyls5xeuvg8eyyycdc8x3c45f0wgssbejgms97m2oxlzzlynw6hi356ro7zgugy5xcb2vkynw2mp3sf0rv7qc9za9rrgsn1q6zem0qjlylc2n4gqwd7qpv3rblj7fxf0eq3jcibby4q3fb5067mm0olvd69xzrxwqayaimyzehoo3ae5j8wb3bvucchlmp2fq25kudqywft5xt796calhgmw23lsmrl86hxbr66wzlhss97geh3i9276cjjasx28qoy2dc49pqgio88eyns8fgvyberxl2lznom2ox13pou909qf3dzokru993x2j9tzfuqxulvknikpbyglyov710i8tv9o7bd5m94v40x92ivs0hohzsn8k3h8sa67xywhjfqjtf1ys6st == \7\u\x\k\y\j\2\n\1\d\h\j\c\x\r\4\l\t\3\o\o\k\c\4\9\b\c\q\j\6\1\z\a\1\n\l\q\i\6\x\e\9\1\f\m\g\u\a\g\0\h\7\u\a\u\5\x\t\7\g\5\m\u\a\1\q\u\4\u\m\q\n\4\8\0\g\c\f\5\e\v\a\e\e\o\p\m\v\i\k\s\s\4\a\n\4\b\8\3\n\3\c\e\h\o\u\0\l\u\3\k\b\0\q\7\t\c\y\l\s\5\x\e\u\v\g\8\e\y\y\y\c\d\c\8\x\3\c\4\5\f\0\w\g\s\s\b\e\j\g\m\s\9\7\m\2\o\x\l\z\z\l\y\n\w\6\h\i\3\5\6\r\o\7\z\g\u\g\y\5\x\c\b\2\v\k\y\n\w\2\m\p\3\s\f\0\r\v\7\q\c\9\z\a\9\r\r\g\s\n\1\q\6\z\e\m\0\q\j\l\y\l\c\2\n\4\g\q\w\d\7\q\p\v\3\r\b\l\j\7\f\x\f\0\e\q\3\j\c\i\b\b\y\4\q\3\f\b\5\0\6\7\m\m\0\o\l\v\d\6\9\x\z\r\x\w\q\a\y\a\i\m\y\z\e\h\o\o\3\a\e\5\j\8\w\b\3\b\v\u\c\c\h\l\m\p\2\f\q\2\5\k\u\d\q\y\w\f\t\5\x\t\7\9\6\c\a\l\h\g\m\w\2\3\l\s\m\r\l\8\6\h\x\b\r\6\6\w\z\l\h\s\s\9\7\g\e\h\3\i\9\2\7\6\c\j\j\a\s\x\2\8\q\o\y\2\d\c\4\9\p\q\g\i\o\8\8\e\y\n\s\8\f\g\v\y\b\e\r\x\l\2\l\z\n\o\m\2\o\x\1\3\p\o\u\9\0\9\q\f\3\d\z\o\k\r\u\9\9\3\x\2\j\9\t\z\f\u\q\x\u\l\v\k\n\i\k\p\b\y\g\l\y\o\v\7\1\0\i\8\t\v\9\o\7\b\d\5\m\9\4\v\4\0\x\9\2\i\v\s\0\h\o\h\z\s\n\8\k\3\h\8\s\a\6\7\x\y\w\h\j\f\q\j\t\f\1\y\s\6\s\t ]] 00:07:55.681 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.681 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:55.681 [2024-11-26 16:13:21.175578] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:55.681 [2024-11-26 16:13:21.175981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72373 ] 00:07:55.681 [2024-11-26 16:13:21.317588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.940 [2024-11-26 16:13:21.339906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.940 [2024-11-26 16:13:21.370050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.940  [2024-11-26T16:13:21.593Z] Copying: 512/512 [B] (average 250 kBps) 00:07:55.940 00:07:55.941 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 7uxkyj2n1dhjcxr4lt3ookc49bcqj61za1nlqi6xe91fmguag0h7uau5xt7g5mua1qu4umqn480gcf5evaeeopmvikss4an4b83n3cehou0lu3kb0q7tcyls5xeuvg8eyyycdc8x3c45f0wgssbejgms97m2oxlzzlynw6hi356ro7zgugy5xcb2vkynw2mp3sf0rv7qc9za9rrgsn1q6zem0qjlylc2n4gqwd7qpv3rblj7fxf0eq3jcibby4q3fb5067mm0olvd69xzrxwqayaimyzehoo3ae5j8wb3bvucchlmp2fq25kudqywft5xt796calhgmw23lsmrl86hxbr66wzlhss97geh3i9276cjjasx28qoy2dc49pqgio88eyns8fgvyberxl2lznom2ox13pou909qf3dzokru993x2j9tzfuqxulvknikpbyglyov710i8tv9o7bd5m94v40x92ivs0hohzsn8k3h8sa67xywhjfqjtf1ys6st == \7\u\x\k\y\j\2\n\1\d\h\j\c\x\r\4\l\t\3\o\o\k\c\4\9\b\c\q\j\6\1\z\a\1\n\l\q\i\6\x\e\9\1\f\m\g\u\a\g\0\h\7\u\a\u\5\x\t\7\g\5\m\u\a\1\q\u\4\u\m\q\n\4\8\0\g\c\f\5\e\v\a\e\e\o\p\m\v\i\k\s\s\4\a\n\4\b\8\3\n\3\c\e\h\o\u\0\l\u\3\k\b\0\q\7\t\c\y\l\s\5\x\e\u\v\g\8\e\y\y\y\c\d\c\8\x\3\c\4\5\f\0\w\g\s\s\b\e\j\g\m\s\9\7\m\2\o\x\l\z\z\l\y\n\w\6\h\i\3\5\6\r\o\7\z\g\u\g\y\5\x\c\b\2\v\k\y\n\w\2\m\p\3\s\f\0\r\v\7\q\c\9\z\a\9\r\r\g\s\n\1\q\6\z\e\m\0\q\j\l\y\l\c\2\n\4\g\q\w\d\7\q\p\v\3\r\b\l\j\7\f\x\f\0\e\q\3\j\c\i\b\b\y\4\q\3\f\b\5\0\6\7\m\m\0\o\l\v\d\6\9\x\z\r\x\w\q\a\y\a\i\m\y\z\e\h\o\o\3\a\e\5\j\8\w\b\3\b\v\u\c\c\h\l\m\p\2\f\q\2\5\k\u\d\q\y\w\f\t\5\x\t\7\9\6\c\a\l\h\g\m\w\2\3\l\s\m\r\l\8\6\h\x\b\r\6\6\w\z\l\h\s\s\9\7\g\e\h\3\i\9\2\7\6\c\j\j\a\s\x\2\8\q\o\y\2\d\c\4\9\p\q\g\i\o\8\8\e\y\n\s\8\f\g\v\y\b\e\r\x\l\2\l\z\n\o\m\2\o\x\1\3\p\o\u\9\0\9\q\f\3\d\z\o\k\r\u\9\9\3\x\2\j\9\t\z\f\u\q\x\u\l\v\k\n\i\k\p\b\y\g\l\y\o\v\7\1\0\i\8\t\v\9\o\7\b\d\5\m\9\4\v\4\0\x\9\2\i\v\s\0\h\o\h\z\s\n\8\k\3\h\8\s\a\6\7\x\y\w\h\j\f\q\j\t\f\1\y\s\6\s\t ]] 00:07:55.941 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:55.941 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:55.941 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:55.941 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:55.941 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.941 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:55.941 [2024-11-26 16:13:21.569277] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:55.941 [2024-11-26 16:13:21.569420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72384 ] 00:07:56.200 [2024-11-26 16:13:21.714617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.200 [2024-11-26 16:13:21.733197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.200 [2024-11-26 16:13:21.759054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.200  [2024-11-26T16:13:22.112Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.459 00:07:56.459 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f1i4zwxf5mjatgpzn00dgtzdcg9q9o4z99qzoq1z26xn82s7j06hlz7rbioal2wls4rnrlirevp098z3hdkh78kxmu372w7z4gyo7wknyltymm7ignjfe64dpw2c9xocna3kbrubt23ricsb5kuek3ykykzr9129tcjwefyc8720fc21awi3kzk7eba4e0wfdnlpz25o6n414qc8d0bg9xajrfqu3fh6b5v4mcnae2t762gsfk6q1rbhdnah7kd1wrbetl0x8r74iuzdkkxohumgkpgjak6dgas64zavh0vb92a01e7hjc3y9cg95viwo6zpfs0mukpcyanv3h5m4fsaasnzfg6g5i1ek6ht3ivw2u5uzt9ss1wz2ovqvh136wveq4brza5t1jpybdorx26d5e90l4bq4ptrlcyvars1nq1g6chlndvu1xmo0hqsw1zjvl48mvion2ojoe29ptxq0rzi2867f5u2do90j597u6d8j8uf877fpvr506dy == \f\1\i\4\z\w\x\f\5\m\j\a\t\g\p\z\n\0\0\d\g\t\z\d\c\g\9\q\9\o\4\z\9\9\q\z\o\q\1\z\2\6\x\n\8\2\s\7\j\0\6\h\l\z\7\r\b\i\o\a\l\2\w\l\s\4\r\n\r\l\i\r\e\v\p\0\9\8\z\3\h\d\k\h\7\8\k\x\m\u\3\7\2\w\7\z\4\g\y\o\7\w\k\n\y\l\t\y\m\m\7\i\g\n\j\f\e\6\4\d\p\w\2\c\9\x\o\c\n\a\3\k\b\r\u\b\t\2\3\r\i\c\s\b\5\k\u\e\k\3\y\k\y\k\z\r\9\1\2\9\t\c\j\w\e\f\y\c\8\7\2\0\f\c\2\1\a\w\i\3\k\z\k\7\e\b\a\4\e\0\w\f\d\n\l\p\z\2\5\o\6\n\4\1\4\q\c\8\d\0\b\g\9\x\a\j\r\f\q\u\3\f\h\6\b\5\v\4\m\c\n\a\e\2\t\7\6\2\g\s\f\k\6\q\1\r\b\h\d\n\a\h\7\k\d\1\w\r\b\e\t\l\0\x\8\r\7\4\i\u\z\d\k\k\x\o\h\u\m\g\k\p\g\j\a\k\6\d\g\a\s\6\4\z\a\v\h\0\v\b\9\2\a\0\1\e\7\h\j\c\3\y\9\c\g\9\5\v\i\w\o\6\z\p\f\s\0\m\u\k\p\c\y\a\n\v\3\h\5\m\4\f\s\a\a\s\n\z\f\g\6\g\5\i\1\e\k\6\h\t\3\i\v\w\2\u\5\u\z\t\9\s\s\1\w\z\2\o\v\q\v\h\1\3\6\w\v\e\q\4\b\r\z\a\5\t\1\j\p\y\b\d\o\r\x\2\6\d\5\e\9\0\l\4\b\q\4\p\t\r\l\c\y\v\a\r\s\1\n\q\1\g\6\c\h\l\n\d\v\u\1\x\m\o\0\h\q\s\w\1\z\j\v\l\4\8\m\v\i\o\n\2\o\j\o\e\2\9\p\t\x\q\0\r\z\i\2\8\6\7\f\5\u\2\d\o\9\0\j\5\9\7\u\6\d\8\j\8\u\f\8\7\7\f\p\v\r\5\0\6\d\y ]] 00:07:56.459 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.459 16:13:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:56.459 [2024-11-26 16:13:21.926095] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:56.459 [2024-11-26 16:13:21.926206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72388 ] 00:07:56.459 [2024-11-26 16:13:22.072237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.459 [2024-11-26 16:13:22.092069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.718 [2024-11-26 16:13:22.120756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.719  [2024-11-26T16:13:22.372Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.719 00:07:56.719 16:13:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f1i4zwxf5mjatgpzn00dgtzdcg9q9o4z99qzoq1z26xn82s7j06hlz7rbioal2wls4rnrlirevp098z3hdkh78kxmu372w7z4gyo7wknyltymm7ignjfe64dpw2c9xocna3kbrubt23ricsb5kuek3ykykzr9129tcjwefyc8720fc21awi3kzk7eba4e0wfdnlpz25o6n414qc8d0bg9xajrfqu3fh6b5v4mcnae2t762gsfk6q1rbhdnah7kd1wrbetl0x8r74iuzdkkxohumgkpgjak6dgas64zavh0vb92a01e7hjc3y9cg95viwo6zpfs0mukpcyanv3h5m4fsaasnzfg6g5i1ek6ht3ivw2u5uzt9ss1wz2ovqvh136wveq4brza5t1jpybdorx26d5e90l4bq4ptrlcyvars1nq1g6chlndvu1xmo0hqsw1zjvl48mvion2ojoe29ptxq0rzi2867f5u2do90j597u6d8j8uf877fpvr506dy == \f\1\i\4\z\w\x\f\5\m\j\a\t\g\p\z\n\0\0\d\g\t\z\d\c\g\9\q\9\o\4\z\9\9\q\z\o\q\1\z\2\6\x\n\8\2\s\7\j\0\6\h\l\z\7\r\b\i\o\a\l\2\w\l\s\4\r\n\r\l\i\r\e\v\p\0\9\8\z\3\h\d\k\h\7\8\k\x\m\u\3\7\2\w\7\z\4\g\y\o\7\w\k\n\y\l\t\y\m\m\7\i\g\n\j\f\e\6\4\d\p\w\2\c\9\x\o\c\n\a\3\k\b\r\u\b\t\2\3\r\i\c\s\b\5\k\u\e\k\3\y\k\y\k\z\r\9\1\2\9\t\c\j\w\e\f\y\c\8\7\2\0\f\c\2\1\a\w\i\3\k\z\k\7\e\b\a\4\e\0\w\f\d\n\l\p\z\2\5\o\6\n\4\1\4\q\c\8\d\0\b\g\9\x\a\j\r\f\q\u\3\f\h\6\b\5\v\4\m\c\n\a\e\2\t\7\6\2\g\s\f\k\6\q\1\r\b\h\d\n\a\h\7\k\d\1\w\r\b\e\t\l\0\x\8\r\7\4\i\u\z\d\k\k\x\o\h\u\m\g\k\p\g\j\a\k\6\d\g\a\s\6\4\z\a\v\h\0\v\b\9\2\a\0\1\e\7\h\j\c\3\y\9\c\g\9\5\v\i\w\o\6\z\p\f\s\0\m\u\k\p\c\y\a\n\v\3\h\5\m\4\f\s\a\a\s\n\z\f\g\6\g\5\i\1\e\k\6\h\t\3\i\v\w\2\u\5\u\z\t\9\s\s\1\w\z\2\o\v\q\v\h\1\3\6\w\v\e\q\4\b\r\z\a\5\t\1\j\p\y\b\d\o\r\x\2\6\d\5\e\9\0\l\4\b\q\4\p\t\r\l\c\y\v\a\r\s\1\n\q\1\g\6\c\h\l\n\d\v\u\1\x\m\o\0\h\q\s\w\1\z\j\v\l\4\8\m\v\i\o\n\2\o\j\o\e\2\9\p\t\x\q\0\r\z\i\2\8\6\7\f\5\u\2\d\o\9\0\j\5\9\7\u\6\d\8\j\8\u\f\8\7\7\f\p\v\r\5\0\6\d\y ]] 00:07:56.719 16:13:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.719 16:13:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:56.719 [2024-11-26 16:13:22.298946] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:56.719 [2024-11-26 16:13:22.299059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72397 ] 00:07:56.978 [2024-11-26 16:13:22.443275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.978 [2024-11-26 16:13:22.462841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.978 [2024-11-26 16:13:22.494805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.978  [2024-11-26T16:13:22.631Z] Copying: 512/512 [B] (average 250 kBps) 00:07:56.978 00:07:56.979 16:13:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f1i4zwxf5mjatgpzn00dgtzdcg9q9o4z99qzoq1z26xn82s7j06hlz7rbioal2wls4rnrlirevp098z3hdkh78kxmu372w7z4gyo7wknyltymm7ignjfe64dpw2c9xocna3kbrubt23ricsb5kuek3ykykzr9129tcjwefyc8720fc21awi3kzk7eba4e0wfdnlpz25o6n414qc8d0bg9xajrfqu3fh6b5v4mcnae2t762gsfk6q1rbhdnah7kd1wrbetl0x8r74iuzdkkxohumgkpgjak6dgas64zavh0vb92a01e7hjc3y9cg95viwo6zpfs0mukpcyanv3h5m4fsaasnzfg6g5i1ek6ht3ivw2u5uzt9ss1wz2ovqvh136wveq4brza5t1jpybdorx26d5e90l4bq4ptrlcyvars1nq1g6chlndvu1xmo0hqsw1zjvl48mvion2ojoe29ptxq0rzi2867f5u2do90j597u6d8j8uf877fpvr506dy == \f\1\i\4\z\w\x\f\5\m\j\a\t\g\p\z\n\0\0\d\g\t\z\d\c\g\9\q\9\o\4\z\9\9\q\z\o\q\1\z\2\6\x\n\8\2\s\7\j\0\6\h\l\z\7\r\b\i\o\a\l\2\w\l\s\4\r\n\r\l\i\r\e\v\p\0\9\8\z\3\h\d\k\h\7\8\k\x\m\u\3\7\2\w\7\z\4\g\y\o\7\w\k\n\y\l\t\y\m\m\7\i\g\n\j\f\e\6\4\d\p\w\2\c\9\x\o\c\n\a\3\k\b\r\u\b\t\2\3\r\i\c\s\b\5\k\u\e\k\3\y\k\y\k\z\r\9\1\2\9\t\c\j\w\e\f\y\c\8\7\2\0\f\c\2\1\a\w\i\3\k\z\k\7\e\b\a\4\e\0\w\f\d\n\l\p\z\2\5\o\6\n\4\1\4\q\c\8\d\0\b\g\9\x\a\j\r\f\q\u\3\f\h\6\b\5\v\4\m\c\n\a\e\2\t\7\6\2\g\s\f\k\6\q\1\r\b\h\d\n\a\h\7\k\d\1\w\r\b\e\t\l\0\x\8\r\7\4\i\u\z\d\k\k\x\o\h\u\m\g\k\p\g\j\a\k\6\d\g\a\s\6\4\z\a\v\h\0\v\b\9\2\a\0\1\e\7\h\j\c\3\y\9\c\g\9\5\v\i\w\o\6\z\p\f\s\0\m\u\k\p\c\y\a\n\v\3\h\5\m\4\f\s\a\a\s\n\z\f\g\6\g\5\i\1\e\k\6\h\t\3\i\v\w\2\u\5\u\z\t\9\s\s\1\w\z\2\o\v\q\v\h\1\3\6\w\v\e\q\4\b\r\z\a\5\t\1\j\p\y\b\d\o\r\x\2\6\d\5\e\9\0\l\4\b\q\4\p\t\r\l\c\y\v\a\r\s\1\n\q\1\g\6\c\h\l\n\d\v\u\1\x\m\o\0\h\q\s\w\1\z\j\v\l\4\8\m\v\i\o\n\2\o\j\o\e\2\9\p\t\x\q\0\r\z\i\2\8\6\7\f\5\u\2\d\o\9\0\j\5\9\7\u\6\d\8\j\8\u\f\8\7\7\f\p\v\r\5\0\6\d\y ]] 00:07:56.979 16:13:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.979 16:13:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:57.238 [2024-11-26 16:13:22.678791] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
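The dd_flags_misc copies being traced through this stretch all come from one nested loop: every input flag in {direct, nonblock} is paired with every output flag in {direct, nonblock, sync, dsync}, and each copy is verified by comparing the two dump files byte for byte, which is what the long escaped [[ ... == ... ]] lines are. A minimal hand-written sketch of that pattern (placeholder file names, not the posix.sh source):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path as used in the trace
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)

for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    # verify the copy landed intact for this flag combination
    [[ "$(< dd.dump0)" == "$(< dd.dump1)" ]] || echo "content mismatch for $flag_ro/$flag_rw"
  done
done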
00:07:57.238 [2024-11-26 16:13:22.679161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72407 ] 00:07:57.238 [2024-11-26 16:13:22.816090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.238 [2024-11-26 16:13:22.835217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.238 [2024-11-26 16:13:22.864071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.238  [2024-11-26T16:13:23.150Z] Copying: 512/512 [B] (average 250 kBps) 00:07:57.497 00:07:57.498 16:13:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f1i4zwxf5mjatgpzn00dgtzdcg9q9o4z99qzoq1z26xn82s7j06hlz7rbioal2wls4rnrlirevp098z3hdkh78kxmu372w7z4gyo7wknyltymm7ignjfe64dpw2c9xocna3kbrubt23ricsb5kuek3ykykzr9129tcjwefyc8720fc21awi3kzk7eba4e0wfdnlpz25o6n414qc8d0bg9xajrfqu3fh6b5v4mcnae2t762gsfk6q1rbhdnah7kd1wrbetl0x8r74iuzdkkxohumgkpgjak6dgas64zavh0vb92a01e7hjc3y9cg95viwo6zpfs0mukpcyanv3h5m4fsaasnzfg6g5i1ek6ht3ivw2u5uzt9ss1wz2ovqvh136wveq4brza5t1jpybdorx26d5e90l4bq4ptrlcyvars1nq1g6chlndvu1xmo0hqsw1zjvl48mvion2ojoe29ptxq0rzi2867f5u2do90j597u6d8j8uf877fpvr506dy == \f\1\i\4\z\w\x\f\5\m\j\a\t\g\p\z\n\0\0\d\g\t\z\d\c\g\9\q\9\o\4\z\9\9\q\z\o\q\1\z\2\6\x\n\8\2\s\7\j\0\6\h\l\z\7\r\b\i\o\a\l\2\w\l\s\4\r\n\r\l\i\r\e\v\p\0\9\8\z\3\h\d\k\h\7\8\k\x\m\u\3\7\2\w\7\z\4\g\y\o\7\w\k\n\y\l\t\y\m\m\7\i\g\n\j\f\e\6\4\d\p\w\2\c\9\x\o\c\n\a\3\k\b\r\u\b\t\2\3\r\i\c\s\b\5\k\u\e\k\3\y\k\y\k\z\r\9\1\2\9\t\c\j\w\e\f\y\c\8\7\2\0\f\c\2\1\a\w\i\3\k\z\k\7\e\b\a\4\e\0\w\f\d\n\l\p\z\2\5\o\6\n\4\1\4\q\c\8\d\0\b\g\9\x\a\j\r\f\q\u\3\f\h\6\b\5\v\4\m\c\n\a\e\2\t\7\6\2\g\s\f\k\6\q\1\r\b\h\d\n\a\h\7\k\d\1\w\r\b\e\t\l\0\x\8\r\7\4\i\u\z\d\k\k\x\o\h\u\m\g\k\p\g\j\a\k\6\d\g\a\s\6\4\z\a\v\h\0\v\b\9\2\a\0\1\e\7\h\j\c\3\y\9\c\g\9\5\v\i\w\o\6\z\p\f\s\0\m\u\k\p\c\y\a\n\v\3\h\5\m\4\f\s\a\a\s\n\z\f\g\6\g\5\i\1\e\k\6\h\t\3\i\v\w\2\u\5\u\z\t\9\s\s\1\w\z\2\o\v\q\v\h\1\3\6\w\v\e\q\4\b\r\z\a\5\t\1\j\p\y\b\d\o\r\x\2\6\d\5\e\9\0\l\4\b\q\4\p\t\r\l\c\y\v\a\r\s\1\n\q\1\g\6\c\h\l\n\d\v\u\1\x\m\o\0\h\q\s\w\1\z\j\v\l\4\8\m\v\i\o\n\2\o\j\o\e\2\9\p\t\x\q\0\r\z\i\2\8\6\7\f\5\u\2\d\o\9\0\j\5\9\7\u\6\d\8\j\8\u\f\8\7\7\f\p\v\r\5\0\6\d\y ]] 00:07:57.498 ************************************ 00:07:57.498 END TEST dd_flags_misc 00:07:57.498 ************************************ 00:07:57.498 00:07:57.498 real 0m2.953s 00:07:57.498 user 0m1.391s 00:07:57.498 sys 0m1.310s 00:07:57.498 16:13:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.498 16:13:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:57.498 * Second test run, disabling liburing, forcing AIO 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.498 ************************************ 00:07:57.498 START TEST dd_flag_append_forced_aio 00:07:57.498 ************************************ 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=2ez3k2enzlckukp8y74kkec2sj5s4cww 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=hbmra1ob9bo344vi8bste9orlwwq810o 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 2ez3k2enzlckukp8y74kkec2sj5s4cww 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s hbmra1ob9bo344vi8bste9orlwwq810o 00:07:57.498 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:57.498 [2024-11-26 16:13:23.103603] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:07:57.498 [2024-11-26 16:13:23.103903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72430 ] 00:07:57.758 [2024-11-26 16:13:23.243624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.758 [2024-11-26 16:13:23.263627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.758 [2024-11-26 16:13:23.291392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.758  [2024-11-26T16:13:23.671Z] Copying: 32/32 [B] (average 31 kBps) 00:07:58.018 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ hbmra1ob9bo344vi8bste9orlwwq810o2ez3k2enzlckukp8y74kkec2sj5s4cww == \h\b\m\r\a\1\o\b\9\b\o\3\4\4\v\i\8\b\s\t\e\9\o\r\l\w\w\q\8\1\0\o\2\e\z\3\k\2\e\n\z\l\c\k\u\k\p\8\y\7\4\k\k\e\c\2\s\j\5\s\4\c\w\w ]] 00:07:58.018 00:07:58.018 real 0m0.376s 00:07:58.018 user 0m0.169s 00:07:58.018 sys 0m0.089s 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.018 ************************************ 00:07:58.018 END TEST dd_flag_append_forced_aio 00:07:58.018 ************************************ 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:58.018 ************************************ 00:07:58.018 START TEST dd_flag_directory_forced_aio 00:07:58.018 ************************************ 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.018 16:13:23 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.018 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.018 [2024-11-26 16:13:23.541771] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:58.018 [2024-11-26 16:13:23.541882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72462 ] 00:07:58.278 [2024-11-26 16:13:23.689075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.278 [2024-11-26 16:13:23.709049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.278 [2024-11-26 16:13:23.737058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.278 [2024-11-26 16:13:23.751781] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.278 [2024-11-26 16:13:23.751834] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.278 [2024-11-26 16:13:23.751866] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.278 [2024-11-26 16:13:23.811920] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.278 16:13:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.278 [2024-11-26 16:13:23.896848] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:58.278 [2024-11-26 16:13:23.897078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72466 ] 00:07:58.538 [2024-11-26 16:13:24.035072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.538 [2024-11-26 16:13:24.054703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.538 [2024-11-26 16:13:24.081016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.538 [2024-11-26 16:13:24.095761] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.538 [2024-11-26 16:13:24.095809] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.538 [2024-11-26 16:13:24.095841] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.538 [2024-11-26 16:13:24.151884] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:58.797 ************************************ 00:07:58.797 END 
TEST dd_flag_directory_forced_aio 00:07:58.797 ************************************ 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:58.797 00:07:58.797 real 0m0.722s 00:07:58.797 user 0m0.343s 00:07:58.797 sys 0m0.169s 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:58.797 ************************************ 00:07:58.797 START TEST dd_flag_nofollow_forced_aio 00:07:58.797 ************************************ 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:07:58.797 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.798 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.798 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.798 [2024-11-26 16:13:24.321615] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:58.798 [2024-11-26 16:13:24.321890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72493 ] 00:07:59.057 [2024-11-26 16:13:24.465376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.057 [2024-11-26 16:13:24.484074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.057 [2024-11-26 16:13:24.513814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.057 [2024-11-26 16:13:24.531415] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.057 [2024-11-26 16:13:24.531501] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.057 [2024-11-26 16:13:24.531537] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.057 [2024-11-26 16:13:24.598398] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.057 16:13:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.317 [2024-11-26 16:13:24.715885] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:59.317 [2024-11-26 16:13:24.716306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72504 ] 00:07:59.317 [2024-11-26 16:13:24.861135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.317 [2024-11-26 16:13:24.880989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.317 [2024-11-26 16:13:24.908740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.317 [2024-11-26 16:13:24.923585] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.317 [2024-11-26 16:13:24.923638] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.317 [2024-11-26 16:13:24.923671] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.577 [2024-11-26 16:13:24.985825] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.577 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:07:59.577 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.577 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:07:59.577 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.577 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:59.577 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.577 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:59.577 16:13:25 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:59.577 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:59.577 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.577 [2024-11-26 16:13:25.113300] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:07:59.577 [2024-11-26 16:13:25.113429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72506 ] 00:07:59.836 [2024-11-26 16:13:25.256534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.836 [2024-11-26 16:13:25.275857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.836 [2024-11-26 16:13:25.303983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.836  [2024-11-26T16:13:25.489Z] Copying: 512/512 [B] (average 500 kBps) 00:07:59.836 00:07:59.836 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ybzmzexj3hbzypg40xyw290fuljaktwdqizdr4ki9u1fp7gx791r7v03ox6sih8etumm1l7xinr7g61vot1hrivo3lby28o7drn2qqmjv0fsrq797ddeed1buwcf5rv1i1okqun7c2c24l2q1bb5hqd668pnrh1fvl36ksmkqkfwircbuwoh350oqcn4lf2yb8ynk9gy76l46t38nvm2su8tfrdta1l9tkupougvvljwxql688i29fc951xtlp7agmzqum6kyvn2sbfv9z2jcjeym8e19w1c1f6v3vxmind20sny5lk1z6jw39wbwm4lu9w98ubism5m2rh5na3tfefsixn029s2qx8jn2alcgz36spy0a0rch2pqjwkj75qkld56o1l0dg2istefmegswps5ax22h7vm98zw36y0krzifovox4iyd0ursproa6ftlfrmio3s5hkfm1duo3g1vc9kkk1lhpqze4hvra8o0ol6gtmuxk1uzsfsxk5bfyb == \y\b\z\m\z\e\x\j\3\h\b\z\y\p\g\4\0\x\y\w\2\9\0\f\u\l\j\a\k\t\w\d\q\i\z\d\r\4\k\i\9\u\1\f\p\7\g\x\7\9\1\r\7\v\0\3\o\x\6\s\i\h\8\e\t\u\m\m\1\l\7\x\i\n\r\7\g\6\1\v\o\t\1\h\r\i\v\o\3\l\b\y\2\8\o\7\d\r\n\2\q\q\m\j\v\0\f\s\r\q\7\9\7\d\d\e\e\d\1\b\u\w\c\f\5\r\v\1\i\1\o\k\q\u\n\7\c\2\c\2\4\l\2\q\1\b\b\5\h\q\d\6\6\8\p\n\r\h\1\f\v\l\3\6\k\s\m\k\q\k\f\w\i\r\c\b\u\w\o\h\3\5\0\o\q\c\n\4\l\f\2\y\b\8\y\n\k\9\g\y\7\6\l\4\6\t\3\8\n\v\m\2\s\u\8\t\f\r\d\t\a\1\l\9\t\k\u\p\o\u\g\v\v\l\j\w\x\q\l\6\8\8\i\2\9\f\c\9\5\1\x\t\l\p\7\a\g\m\z\q\u\m\6\k\y\v\n\2\s\b\f\v\9\z\2\j\c\j\e\y\m\8\e\1\9\w\1\c\1\f\6\v\3\v\x\m\i\n\d\2\0\s\n\y\5\l\k\1\z\6\j\w\3\9\w\b\w\m\4\l\u\9\w\9\8\u\b\i\s\m\5\m\2\r\h\5\n\a\3\t\f\e\f\s\i\x\n\0\2\9\s\2\q\x\8\j\n\2\a\l\c\g\z\3\6\s\p\y\0\a\0\r\c\h\2\p\q\j\w\k\j\7\5\q\k\l\d\5\6\o\1\l\0\d\g\2\i\s\t\e\f\m\e\g\s\w\p\s\5\a\x\2\2\h\7\v\m\9\8\z\w\3\6\y\0\k\r\z\i\f\o\v\o\x\4\i\y\d\0\u\r\s\p\r\o\a\6\f\t\l\f\r\m\i\o\3\s\5\h\k\f\m\1\d\u\o\3\g\1\v\c\9\k\k\k\1\l\h\p\q\z\e\4\h\v\r\a\8\o\0\o\l\6\g\t\m\u\x\k\1\u\z\s\f\s\x\k\5\b\f\y\b ]] 00:07:59.836 00:07:59.836 real 0m1.213s 00:07:59.836 user 0m0.588s 00:07:59.836 sys 0m0.283s 00:07:59.836 ************************************ 00:07:59.836 END TEST dd_flag_nofollow_forced_aio 00:07:59.836 ************************************ 00:07:59.836 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.836 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio 
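The dd_flag_nofollow_forced_aio case that just ended works on symlinks: with --iflag=nofollow or --oflag=nofollow the open of a link must fail (the "Too many levels of symbolic links" errors above), while a plain copy through the link still succeeds; --aio is the switch the "Second test run" banner added to force kernel AIO instead of liburing. A rough, hand-written equivalent with placeholder names:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path as used in the trace
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link

# nofollow on either side must refuse to open the symlink itself
"$SPDK_DD" --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 || echo "input link refused, as expected"
"$SPDK_DD" --aio --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow || echo "output link refused, as expected"

# without the flag the link is followed and the copy works
"$SPDK_DD" --aio --if=dd.dump0.link --of=dd.dump1 && echo "copy through the symlink succeeded"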
noatime 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:00.096 ************************************ 00:08:00.096 START TEST dd_flag_noatime_forced_aio 00:08:00.096 ************************************ 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732637605 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732637605 00:08:00.096 16:13:25 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:01.034 16:13:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.034 [2024-11-26 16:13:26.609792] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:01.034 [2024-11-26 16:13:26.610144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72552 ] 00:08:01.293 [2024-11-26 16:13:26.763133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.294 [2024-11-26 16:13:26.787232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.294 [2024-11-26 16:13:26.820513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.294  [2024-11-26T16:13:27.206Z] Copying: 512/512 [B] (average 500 kBps) 00:08:01.553 00:08:01.553 16:13:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.553 16:13:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732637605 )) 00:08:01.553 16:13:26 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.553 16:13:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732637605 )) 00:08:01.553 16:13:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.553 [2024-11-26 16:13:27.058633] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:01.553 [2024-11-26 16:13:27.058742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72558 ] 00:08:01.812 [2024-11-26 16:13:27.211638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.812 [2024-11-26 16:13:27.236646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.812 [2024-11-26 16:13:27.270006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.812  [2024-11-26T16:13:27.465Z] Copying: 512/512 [B] (average 500 kBps) 00:08:01.812 00:08:01.812 16:13:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.812 ************************************ 00:08:01.812 END TEST dd_flag_noatime_forced_aio 00:08:01.812 ************************************ 00:08:01.812 16:13:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732637607 )) 00:08:01.812 00:08:01.812 real 0m1.902s 00:08:01.812 user 0m0.444s 00:08:01.812 sys 0m0.215s 00:08:01.812 16:13:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.812 16:13:27 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.072 ************************************ 00:08:02.072 START TEST dd_flags_misc_forced_aio 00:08:02.072 ************************************ 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.072 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:02.072 [2024-11-26 16:13:27.541302] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:02.072 [2024-11-26 16:13:27.541580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72590 ] 00:08:02.072 [2024-11-26 16:13:27.689220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.072 [2024-11-26 16:13:27.711931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.331 [2024-11-26 16:13:27.741088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.331  [2024-11-26T16:13:27.984Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.331 00:08:02.331 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zhlu85n21neswowtww0uadm5u18zub1he3dgcnirjji83xu9z4m9iw6tsryf9y7kly709m6orx76q00av9413gdilbeg69ce8912xd7qzcw62pfa2oyobunmd77dd4t5dopibs5nxez1sh6z97sk8sw1zhvusjpvjlhf6a25vg8ctpuk5kscaiygxu1x3jgdvpjbh58o2n2xxpw7gvzutbz930ff2rtfw3wiodyfsscdbkqtk5p15yh80u6ijrokwoedcfxh0tb7zcxqlvhj6jxefuts1qzosf7el9ayzvcvo6fn7l576o2a7irbcyvrfmublziy6b62oyfbimpb1na4nifzeqdzvxnb121cyqdsqye6q8r4n0jyi3unene2i6xqilhzib9ga4pez5cr59g1ixjjn3yunl8tyx6mtchlp90p7pqo6s09m8wju3czev4jig72adq5ktwudm5zxu9fiijq6uf8jymrhho29brw3y2ztqycruijt7b2ei0c == 
\z\h\l\u\8\5\n\2\1\n\e\s\w\o\w\t\w\w\0\u\a\d\m\5\u\1\8\z\u\b\1\h\e\3\d\g\c\n\i\r\j\j\i\8\3\x\u\9\z\4\m\9\i\w\6\t\s\r\y\f\9\y\7\k\l\y\7\0\9\m\6\o\r\x\7\6\q\0\0\a\v\9\4\1\3\g\d\i\l\b\e\g\6\9\c\e\8\9\1\2\x\d\7\q\z\c\w\6\2\p\f\a\2\o\y\o\b\u\n\m\d\7\7\d\d\4\t\5\d\o\p\i\b\s\5\n\x\e\z\1\s\h\6\z\9\7\s\k\8\s\w\1\z\h\v\u\s\j\p\v\j\l\h\f\6\a\2\5\v\g\8\c\t\p\u\k\5\k\s\c\a\i\y\g\x\u\1\x\3\j\g\d\v\p\j\b\h\5\8\o\2\n\2\x\x\p\w\7\g\v\z\u\t\b\z\9\3\0\f\f\2\r\t\f\w\3\w\i\o\d\y\f\s\s\c\d\b\k\q\t\k\5\p\1\5\y\h\8\0\u\6\i\j\r\o\k\w\o\e\d\c\f\x\h\0\t\b\7\z\c\x\q\l\v\h\j\6\j\x\e\f\u\t\s\1\q\z\o\s\f\7\e\l\9\a\y\z\v\c\v\o\6\f\n\7\l\5\7\6\o\2\a\7\i\r\b\c\y\v\r\f\m\u\b\l\z\i\y\6\b\6\2\o\y\f\b\i\m\p\b\1\n\a\4\n\i\f\z\e\q\d\z\v\x\n\b\1\2\1\c\y\q\d\s\q\y\e\6\q\8\r\4\n\0\j\y\i\3\u\n\e\n\e\2\i\6\x\q\i\l\h\z\i\b\9\g\a\4\p\e\z\5\c\r\5\9\g\1\i\x\j\j\n\3\y\u\n\l\8\t\y\x\6\m\t\c\h\l\p\9\0\p\7\p\q\o\6\s\0\9\m\8\w\j\u\3\c\z\e\v\4\j\i\g\7\2\a\d\q\5\k\t\w\u\d\m\5\z\x\u\9\f\i\i\j\q\6\u\f\8\j\y\m\r\h\h\o\2\9\b\r\w\3\y\2\z\t\q\y\c\r\u\i\j\t\7\b\2\e\i\0\c ]] 00:08:02.331 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.331 16:13:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:02.331 [2024-11-26 16:13:27.936982] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:02.331 [2024-11-26 16:13:27.937088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72592 ] 00:08:02.590 [2024-11-26 16:13:28.078975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.590 [2024-11-26 16:13:28.101311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.590 [2024-11-26 16:13:28.131542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.590  [2024-11-26T16:13:28.503Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.850 00:08:02.850 16:13:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zhlu85n21neswowtww0uadm5u18zub1he3dgcnirjji83xu9z4m9iw6tsryf9y7kly709m6orx76q00av9413gdilbeg69ce8912xd7qzcw62pfa2oyobunmd77dd4t5dopibs5nxez1sh6z97sk8sw1zhvusjpvjlhf6a25vg8ctpuk5kscaiygxu1x3jgdvpjbh58o2n2xxpw7gvzutbz930ff2rtfw3wiodyfsscdbkqtk5p15yh80u6ijrokwoedcfxh0tb7zcxqlvhj6jxefuts1qzosf7el9ayzvcvo6fn7l576o2a7irbcyvrfmublziy6b62oyfbimpb1na4nifzeqdzvxnb121cyqdsqye6q8r4n0jyi3unene2i6xqilhzib9ga4pez5cr59g1ixjjn3yunl8tyx6mtchlp90p7pqo6s09m8wju3czev4jig72adq5ktwudm5zxu9fiijq6uf8jymrhho29brw3y2ztqycruijt7b2ei0c == 
\z\h\l\u\8\5\n\2\1\n\e\s\w\o\w\t\w\w\0\u\a\d\m\5\u\1\8\z\u\b\1\h\e\3\d\g\c\n\i\r\j\j\i\8\3\x\u\9\z\4\m\9\i\w\6\t\s\r\y\f\9\y\7\k\l\y\7\0\9\m\6\o\r\x\7\6\q\0\0\a\v\9\4\1\3\g\d\i\l\b\e\g\6\9\c\e\8\9\1\2\x\d\7\q\z\c\w\6\2\p\f\a\2\o\y\o\b\u\n\m\d\7\7\d\d\4\t\5\d\o\p\i\b\s\5\n\x\e\z\1\s\h\6\z\9\7\s\k\8\s\w\1\z\h\v\u\s\j\p\v\j\l\h\f\6\a\2\5\v\g\8\c\t\p\u\k\5\k\s\c\a\i\y\g\x\u\1\x\3\j\g\d\v\p\j\b\h\5\8\o\2\n\2\x\x\p\w\7\g\v\z\u\t\b\z\9\3\0\f\f\2\r\t\f\w\3\w\i\o\d\y\f\s\s\c\d\b\k\q\t\k\5\p\1\5\y\h\8\0\u\6\i\j\r\o\k\w\o\e\d\c\f\x\h\0\t\b\7\z\c\x\q\l\v\h\j\6\j\x\e\f\u\t\s\1\q\z\o\s\f\7\e\l\9\a\y\z\v\c\v\o\6\f\n\7\l\5\7\6\o\2\a\7\i\r\b\c\y\v\r\f\m\u\b\l\z\i\y\6\b\6\2\o\y\f\b\i\m\p\b\1\n\a\4\n\i\f\z\e\q\d\z\v\x\n\b\1\2\1\c\y\q\d\s\q\y\e\6\q\8\r\4\n\0\j\y\i\3\u\n\e\n\e\2\i\6\x\q\i\l\h\z\i\b\9\g\a\4\p\e\z\5\c\r\5\9\g\1\i\x\j\j\n\3\y\u\n\l\8\t\y\x\6\m\t\c\h\l\p\9\0\p\7\p\q\o\6\s\0\9\m\8\w\j\u\3\c\z\e\v\4\j\i\g\7\2\a\d\q\5\k\t\w\u\d\m\5\z\x\u\9\f\i\i\j\q\6\u\f\8\j\y\m\r\h\h\o\2\9\b\r\w\3\y\2\z\t\q\y\c\r\u\i\j\t\7\b\2\e\i\0\c ]] 00:08:02.850 16:13:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.850 16:13:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:02.850 [2024-11-26 16:13:28.330523] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:02.850 [2024-11-26 16:13:28.330625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72594 ] 00:08:02.850 [2024-11-26 16:13:28.476676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.109 [2024-11-26 16:13:28.498643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.109 [2024-11-26 16:13:28.527652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.109  [2024-11-26T16:13:28.762Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.109 00:08:03.110 16:13:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zhlu85n21neswowtww0uadm5u18zub1he3dgcnirjji83xu9z4m9iw6tsryf9y7kly709m6orx76q00av9413gdilbeg69ce8912xd7qzcw62pfa2oyobunmd77dd4t5dopibs5nxez1sh6z97sk8sw1zhvusjpvjlhf6a25vg8ctpuk5kscaiygxu1x3jgdvpjbh58o2n2xxpw7gvzutbz930ff2rtfw3wiodyfsscdbkqtk5p15yh80u6ijrokwoedcfxh0tb7zcxqlvhj6jxefuts1qzosf7el9ayzvcvo6fn7l576o2a7irbcyvrfmublziy6b62oyfbimpb1na4nifzeqdzvxnb121cyqdsqye6q8r4n0jyi3unene2i6xqilhzib9ga4pez5cr59g1ixjjn3yunl8tyx6mtchlp90p7pqo6s09m8wju3czev4jig72adq5ktwudm5zxu9fiijq6uf8jymrhho29brw3y2ztqycruijt7b2ei0c == 
\z\h\l\u\8\5\n\2\1\n\e\s\w\o\w\t\w\w\0\u\a\d\m\5\u\1\8\z\u\b\1\h\e\3\d\g\c\n\i\r\j\j\i\8\3\x\u\9\z\4\m\9\i\w\6\t\s\r\y\f\9\y\7\k\l\y\7\0\9\m\6\o\r\x\7\6\q\0\0\a\v\9\4\1\3\g\d\i\l\b\e\g\6\9\c\e\8\9\1\2\x\d\7\q\z\c\w\6\2\p\f\a\2\o\y\o\b\u\n\m\d\7\7\d\d\4\t\5\d\o\p\i\b\s\5\n\x\e\z\1\s\h\6\z\9\7\s\k\8\s\w\1\z\h\v\u\s\j\p\v\j\l\h\f\6\a\2\5\v\g\8\c\t\p\u\k\5\k\s\c\a\i\y\g\x\u\1\x\3\j\g\d\v\p\j\b\h\5\8\o\2\n\2\x\x\p\w\7\g\v\z\u\t\b\z\9\3\0\f\f\2\r\t\f\w\3\w\i\o\d\y\f\s\s\c\d\b\k\q\t\k\5\p\1\5\y\h\8\0\u\6\i\j\r\o\k\w\o\e\d\c\f\x\h\0\t\b\7\z\c\x\q\l\v\h\j\6\j\x\e\f\u\t\s\1\q\z\o\s\f\7\e\l\9\a\y\z\v\c\v\o\6\f\n\7\l\5\7\6\o\2\a\7\i\r\b\c\y\v\r\f\m\u\b\l\z\i\y\6\b\6\2\o\y\f\b\i\m\p\b\1\n\a\4\n\i\f\z\e\q\d\z\v\x\n\b\1\2\1\c\y\q\d\s\q\y\e\6\q\8\r\4\n\0\j\y\i\3\u\n\e\n\e\2\i\6\x\q\i\l\h\z\i\b\9\g\a\4\p\e\z\5\c\r\5\9\g\1\i\x\j\j\n\3\y\u\n\l\8\t\y\x\6\m\t\c\h\l\p\9\0\p\7\p\q\o\6\s\0\9\m\8\w\j\u\3\c\z\e\v\4\j\i\g\7\2\a\d\q\5\k\t\w\u\d\m\5\z\x\u\9\f\i\i\j\q\6\u\f\8\j\y\m\r\h\h\o\2\9\b\r\w\3\y\2\z\t\q\y\c\r\u\i\j\t\7\b\2\e\i\0\c ]] 00:08:03.110 16:13:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.110 16:13:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:03.110 [2024-11-26 16:13:28.709942] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:03.110 [2024-11-26 16:13:28.710045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72607 ] 00:08:03.369 [2024-11-26 16:13:28.852260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.369 [2024-11-26 16:13:28.876624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.369 [2024-11-26 16:13:28.908837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.369  [2024-11-26T16:13:29.300Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.647 00:08:03.647 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zhlu85n21neswowtww0uadm5u18zub1he3dgcnirjji83xu9z4m9iw6tsryf9y7kly709m6orx76q00av9413gdilbeg69ce8912xd7qzcw62pfa2oyobunmd77dd4t5dopibs5nxez1sh6z97sk8sw1zhvusjpvjlhf6a25vg8ctpuk5kscaiygxu1x3jgdvpjbh58o2n2xxpw7gvzutbz930ff2rtfw3wiodyfsscdbkqtk5p15yh80u6ijrokwoedcfxh0tb7zcxqlvhj6jxefuts1qzosf7el9ayzvcvo6fn7l576o2a7irbcyvrfmublziy6b62oyfbimpb1na4nifzeqdzvxnb121cyqdsqye6q8r4n0jyi3unene2i6xqilhzib9ga4pez5cr59g1ixjjn3yunl8tyx6mtchlp90p7pqo6s09m8wju3czev4jig72adq5ktwudm5zxu9fiijq6uf8jymrhho29brw3y2ztqycruijt7b2ei0c == 
\z\h\l\u\8\5\n\2\1\n\e\s\w\o\w\t\w\w\0\u\a\d\m\5\u\1\8\z\u\b\1\h\e\3\d\g\c\n\i\r\j\j\i\8\3\x\u\9\z\4\m\9\i\w\6\t\s\r\y\f\9\y\7\k\l\y\7\0\9\m\6\o\r\x\7\6\q\0\0\a\v\9\4\1\3\g\d\i\l\b\e\g\6\9\c\e\8\9\1\2\x\d\7\q\z\c\w\6\2\p\f\a\2\o\y\o\b\u\n\m\d\7\7\d\d\4\t\5\d\o\p\i\b\s\5\n\x\e\z\1\s\h\6\z\9\7\s\k\8\s\w\1\z\h\v\u\s\j\p\v\j\l\h\f\6\a\2\5\v\g\8\c\t\p\u\k\5\k\s\c\a\i\y\g\x\u\1\x\3\j\g\d\v\p\j\b\h\5\8\o\2\n\2\x\x\p\w\7\g\v\z\u\t\b\z\9\3\0\f\f\2\r\t\f\w\3\w\i\o\d\y\f\s\s\c\d\b\k\q\t\k\5\p\1\5\y\h\8\0\u\6\i\j\r\o\k\w\o\e\d\c\f\x\h\0\t\b\7\z\c\x\q\l\v\h\j\6\j\x\e\f\u\t\s\1\q\z\o\s\f\7\e\l\9\a\y\z\v\c\v\o\6\f\n\7\l\5\7\6\o\2\a\7\i\r\b\c\y\v\r\f\m\u\b\l\z\i\y\6\b\6\2\o\y\f\b\i\m\p\b\1\n\a\4\n\i\f\z\e\q\d\z\v\x\n\b\1\2\1\c\y\q\d\s\q\y\e\6\q\8\r\4\n\0\j\y\i\3\u\n\e\n\e\2\i\6\x\q\i\l\h\z\i\b\9\g\a\4\p\e\z\5\c\r\5\9\g\1\i\x\j\j\n\3\y\u\n\l\8\t\y\x\6\m\t\c\h\l\p\9\0\p\7\p\q\o\6\s\0\9\m\8\w\j\u\3\c\z\e\v\4\j\i\g\7\2\a\d\q\5\k\t\w\u\d\m\5\z\x\u\9\f\i\i\j\q\6\u\f\8\j\y\m\r\h\h\o\2\9\b\r\w\3\y\2\z\t\q\y\c\r\u\i\j\t\7\b\2\e\i\0\c ]] 00:08:03.647 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:03.647 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:03.647 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:03.647 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:03.647 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.647 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:03.647 [2024-11-26 16:13:29.114207] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:03.647 [2024-11-26 16:13:29.114322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72609 ] 00:08:03.647 [2024-11-26 16:13:29.253191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.648 [2024-11-26 16:13:29.274272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.911 [2024-11-26 16:13:29.303847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.911  [2024-11-26T16:13:29.564Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.911 00:08:03.911 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6mm7obas30t3rnywwen71ybljv2oyhi9i79gdt3rsmwi2b572qalicv71opm7qw171rynh4cxabbpmatpsrwxydp0ocablbvsws02dowvy4lqxov0pcmwc0n4gb48kadw7adaby6d5emls6t91munusuou9zhac47fxggxqo6i1049tntsvhv2fd4mhavldrqfltynmk9tkci2nurbqm8kh8ap1upukbo3c15f0cc7ax35y0y942nwr5fptqico8qdcgq4sioscwpwkrxe3ce5cdhb4a81yvt8xr1i0uxge0hy8yw3omapnv0me690k8vc3ywy2je6m89ymqc2cflwcz999fcle3xczxta02hy205944jnyvqd6pxhe4fp2nhcf8x8smz894yxbx8woaj1gvtcgeza6mbreqpt5u5t7gcmnnboxfzjvwtsxdut6mvi4ub82i3zn8hiyu115tg57q6t9ak920rj4b8uf48to73sncn7p7rksytfkhuqn3 == \6\m\m\7\o\b\a\s\3\0\t\3\r\n\y\w\w\e\n\7\1\y\b\l\j\v\2\o\y\h\i\9\i\7\9\g\d\t\3\r\s\m\w\i\2\b\5\7\2\q\a\l\i\c\v\7\1\o\p\m\7\q\w\1\7\1\r\y\n\h\4\c\x\a\b\b\p\m\a\t\p\s\r\w\x\y\d\p\0\o\c\a\b\l\b\v\s\w\s\0\2\d\o\w\v\y\4\l\q\x\o\v\0\p\c\m\w\c\0\n\4\g\b\4\8\k\a\d\w\7\a\d\a\b\y\6\d\5\e\m\l\s\6\t\9\1\m\u\n\u\s\u\o\u\9\z\h\a\c\4\7\f\x\g\g\x\q\o\6\i\1\0\4\9\t\n\t\s\v\h\v\2\f\d\4\m\h\a\v\l\d\r\q\f\l\t\y\n\m\k\9\t\k\c\i\2\n\u\r\b\q\m\8\k\h\8\a\p\1\u\p\u\k\b\o\3\c\1\5\f\0\c\c\7\a\x\3\5\y\0\y\9\4\2\n\w\r\5\f\p\t\q\i\c\o\8\q\d\c\g\q\4\s\i\o\s\c\w\p\w\k\r\x\e\3\c\e\5\c\d\h\b\4\a\8\1\y\v\t\8\x\r\1\i\0\u\x\g\e\0\h\y\8\y\w\3\o\m\a\p\n\v\0\m\e\6\9\0\k\8\v\c\3\y\w\y\2\j\e\6\m\8\9\y\m\q\c\2\c\f\l\w\c\z\9\9\9\f\c\l\e\3\x\c\z\x\t\a\0\2\h\y\2\0\5\9\4\4\j\n\y\v\q\d\6\p\x\h\e\4\f\p\2\n\h\c\f\8\x\8\s\m\z\8\9\4\y\x\b\x\8\w\o\a\j\1\g\v\t\c\g\e\z\a\6\m\b\r\e\q\p\t\5\u\5\t\7\g\c\m\n\n\b\o\x\f\z\j\v\w\t\s\x\d\u\t\6\m\v\i\4\u\b\8\2\i\3\z\n\8\h\i\y\u\1\1\5\t\g\5\7\q\6\t\9\a\k\9\2\0\r\j\4\b\8\u\f\4\8\t\o\7\3\s\n\c\n\7\p\7\r\k\s\y\t\f\k\h\u\q\n\3 ]] 00:08:03.911 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.911 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:03.911 [2024-11-26 16:13:29.492293] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:03.911 [2024-11-26 16:13:29.492528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72614 ] 00:08:04.170 [2024-11-26 16:13:29.634964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.170 [2024-11-26 16:13:29.659595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.170 [2024-11-26 16:13:29.692598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.170  [2024-11-26T16:13:30.082Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.429 00:08:04.429 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6mm7obas30t3rnywwen71ybljv2oyhi9i79gdt3rsmwi2b572qalicv71opm7qw171rynh4cxabbpmatpsrwxydp0ocablbvsws02dowvy4lqxov0pcmwc0n4gb48kadw7adaby6d5emls6t91munusuou9zhac47fxggxqo6i1049tntsvhv2fd4mhavldrqfltynmk9tkci2nurbqm8kh8ap1upukbo3c15f0cc7ax35y0y942nwr5fptqico8qdcgq4sioscwpwkrxe3ce5cdhb4a81yvt8xr1i0uxge0hy8yw3omapnv0me690k8vc3ywy2je6m89ymqc2cflwcz999fcle3xczxta02hy205944jnyvqd6pxhe4fp2nhcf8x8smz894yxbx8woaj1gvtcgeza6mbreqpt5u5t7gcmnnboxfzjvwtsxdut6mvi4ub82i3zn8hiyu115tg57q6t9ak920rj4b8uf48to73sncn7p7rksytfkhuqn3 == \6\m\m\7\o\b\a\s\3\0\t\3\r\n\y\w\w\e\n\7\1\y\b\l\j\v\2\o\y\h\i\9\i\7\9\g\d\t\3\r\s\m\w\i\2\b\5\7\2\q\a\l\i\c\v\7\1\o\p\m\7\q\w\1\7\1\r\y\n\h\4\c\x\a\b\b\p\m\a\t\p\s\r\w\x\y\d\p\0\o\c\a\b\l\b\v\s\w\s\0\2\d\o\w\v\y\4\l\q\x\o\v\0\p\c\m\w\c\0\n\4\g\b\4\8\k\a\d\w\7\a\d\a\b\y\6\d\5\e\m\l\s\6\t\9\1\m\u\n\u\s\u\o\u\9\z\h\a\c\4\7\f\x\g\g\x\q\o\6\i\1\0\4\9\t\n\t\s\v\h\v\2\f\d\4\m\h\a\v\l\d\r\q\f\l\t\y\n\m\k\9\t\k\c\i\2\n\u\r\b\q\m\8\k\h\8\a\p\1\u\p\u\k\b\o\3\c\1\5\f\0\c\c\7\a\x\3\5\y\0\y\9\4\2\n\w\r\5\f\p\t\q\i\c\o\8\q\d\c\g\q\4\s\i\o\s\c\w\p\w\k\r\x\e\3\c\e\5\c\d\h\b\4\a\8\1\y\v\t\8\x\r\1\i\0\u\x\g\e\0\h\y\8\y\w\3\o\m\a\p\n\v\0\m\e\6\9\0\k\8\v\c\3\y\w\y\2\j\e\6\m\8\9\y\m\q\c\2\c\f\l\w\c\z\9\9\9\f\c\l\e\3\x\c\z\x\t\a\0\2\h\y\2\0\5\9\4\4\j\n\y\v\q\d\6\p\x\h\e\4\f\p\2\n\h\c\f\8\x\8\s\m\z\8\9\4\y\x\b\x\8\w\o\a\j\1\g\v\t\c\g\e\z\a\6\m\b\r\e\q\p\t\5\u\5\t\7\g\c\m\n\n\b\o\x\f\z\j\v\w\t\s\x\d\u\t\6\m\v\i\4\u\b\8\2\i\3\z\n\8\h\i\y\u\1\1\5\t\g\5\7\q\6\t\9\a\k\9\2\0\r\j\4\b\8\u\f\4\8\t\o\7\3\s\n\c\n\7\p\7\r\k\s\y\t\f\k\h\u\q\n\3 ]] 00:08:04.429 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.430 16:13:29 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:04.430 [2024-11-26 16:13:29.896236] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:04.430 [2024-11-26 16:13:29.896506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72627 ] 00:08:04.430 [2024-11-26 16:13:30.042961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.430 [2024-11-26 16:13:30.066720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.689 [2024-11-26 16:13:30.098193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.689  [2024-11-26T16:13:30.342Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.689 00:08:04.689 16:13:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6mm7obas30t3rnywwen71ybljv2oyhi9i79gdt3rsmwi2b572qalicv71opm7qw171rynh4cxabbpmatpsrwxydp0ocablbvsws02dowvy4lqxov0pcmwc0n4gb48kadw7adaby6d5emls6t91munusuou9zhac47fxggxqo6i1049tntsvhv2fd4mhavldrqfltynmk9tkci2nurbqm8kh8ap1upukbo3c15f0cc7ax35y0y942nwr5fptqico8qdcgq4sioscwpwkrxe3ce5cdhb4a81yvt8xr1i0uxge0hy8yw3omapnv0me690k8vc3ywy2je6m89ymqc2cflwcz999fcle3xczxta02hy205944jnyvqd6pxhe4fp2nhcf8x8smz894yxbx8woaj1gvtcgeza6mbreqpt5u5t7gcmnnboxfzjvwtsxdut6mvi4ub82i3zn8hiyu115tg57q6t9ak920rj4b8uf48to73sncn7p7rksytfkhuqn3 == \6\m\m\7\o\b\a\s\3\0\t\3\r\n\y\w\w\e\n\7\1\y\b\l\j\v\2\o\y\h\i\9\i\7\9\g\d\t\3\r\s\m\w\i\2\b\5\7\2\q\a\l\i\c\v\7\1\o\p\m\7\q\w\1\7\1\r\y\n\h\4\c\x\a\b\b\p\m\a\t\p\s\r\w\x\y\d\p\0\o\c\a\b\l\b\v\s\w\s\0\2\d\o\w\v\y\4\l\q\x\o\v\0\p\c\m\w\c\0\n\4\g\b\4\8\k\a\d\w\7\a\d\a\b\y\6\d\5\e\m\l\s\6\t\9\1\m\u\n\u\s\u\o\u\9\z\h\a\c\4\7\f\x\g\g\x\q\o\6\i\1\0\4\9\t\n\t\s\v\h\v\2\f\d\4\m\h\a\v\l\d\r\q\f\l\t\y\n\m\k\9\t\k\c\i\2\n\u\r\b\q\m\8\k\h\8\a\p\1\u\p\u\k\b\o\3\c\1\5\f\0\c\c\7\a\x\3\5\y\0\y\9\4\2\n\w\r\5\f\p\t\q\i\c\o\8\q\d\c\g\q\4\s\i\o\s\c\w\p\w\k\r\x\e\3\c\e\5\c\d\h\b\4\a\8\1\y\v\t\8\x\r\1\i\0\u\x\g\e\0\h\y\8\y\w\3\o\m\a\p\n\v\0\m\e\6\9\0\k\8\v\c\3\y\w\y\2\j\e\6\m\8\9\y\m\q\c\2\c\f\l\w\c\z\9\9\9\f\c\l\e\3\x\c\z\x\t\a\0\2\h\y\2\0\5\9\4\4\j\n\y\v\q\d\6\p\x\h\e\4\f\p\2\n\h\c\f\8\x\8\s\m\z\8\9\4\y\x\b\x\8\w\o\a\j\1\g\v\t\c\g\e\z\a\6\m\b\r\e\q\p\t\5\u\5\t\7\g\c\m\n\n\b\o\x\f\z\j\v\w\t\s\x\d\u\t\6\m\v\i\4\u\b\8\2\i\3\z\n\8\h\i\y\u\1\1\5\t\g\5\7\q\6\t\9\a\k\9\2\0\r\j\4\b\8\u\f\4\8\t\o\7\3\s\n\c\n\7\p\7\r\k\s\y\t\f\k\h\u\q\n\3 ]] 00:08:04.689 16:13:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.689 16:13:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:04.689 [2024-11-26 16:13:30.284706] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:04.689 [2024-11-26 16:13:30.284804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72629 ] 00:08:04.948 [2024-11-26 16:13:30.429373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.948 [2024-11-26 16:13:30.450173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.948 [2024-11-26 16:13:30.479168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.948  [2024-11-26T16:13:30.861Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.208 00:08:05.208 16:13:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6mm7obas30t3rnywwen71ybljv2oyhi9i79gdt3rsmwi2b572qalicv71opm7qw171rynh4cxabbpmatpsrwxydp0ocablbvsws02dowvy4lqxov0pcmwc0n4gb48kadw7adaby6d5emls6t91munusuou9zhac47fxggxqo6i1049tntsvhv2fd4mhavldrqfltynmk9tkci2nurbqm8kh8ap1upukbo3c15f0cc7ax35y0y942nwr5fptqico8qdcgq4sioscwpwkrxe3ce5cdhb4a81yvt8xr1i0uxge0hy8yw3omapnv0me690k8vc3ywy2je6m89ymqc2cflwcz999fcle3xczxta02hy205944jnyvqd6pxhe4fp2nhcf8x8smz894yxbx8woaj1gvtcgeza6mbreqpt5u5t7gcmnnboxfzjvwtsxdut6mvi4ub82i3zn8hiyu115tg57q6t9ak920rj4b8uf48to73sncn7p7rksytfkhuqn3 == \6\m\m\7\o\b\a\s\3\0\t\3\r\n\y\w\w\e\n\7\1\y\b\l\j\v\2\o\y\h\i\9\i\7\9\g\d\t\3\r\s\m\w\i\2\b\5\7\2\q\a\l\i\c\v\7\1\o\p\m\7\q\w\1\7\1\r\y\n\h\4\c\x\a\b\b\p\m\a\t\p\s\r\w\x\y\d\p\0\o\c\a\b\l\b\v\s\w\s\0\2\d\o\w\v\y\4\l\q\x\o\v\0\p\c\m\w\c\0\n\4\g\b\4\8\k\a\d\w\7\a\d\a\b\y\6\d\5\e\m\l\s\6\t\9\1\m\u\n\u\s\u\o\u\9\z\h\a\c\4\7\f\x\g\g\x\q\o\6\i\1\0\4\9\t\n\t\s\v\h\v\2\f\d\4\m\h\a\v\l\d\r\q\f\l\t\y\n\m\k\9\t\k\c\i\2\n\u\r\b\q\m\8\k\h\8\a\p\1\u\p\u\k\b\o\3\c\1\5\f\0\c\c\7\a\x\3\5\y\0\y\9\4\2\n\w\r\5\f\p\t\q\i\c\o\8\q\d\c\g\q\4\s\i\o\s\c\w\p\w\k\r\x\e\3\c\e\5\c\d\h\b\4\a\8\1\y\v\t\8\x\r\1\i\0\u\x\g\e\0\h\y\8\y\w\3\o\m\a\p\n\v\0\m\e\6\9\0\k\8\v\c\3\y\w\y\2\j\e\6\m\8\9\y\m\q\c\2\c\f\l\w\c\z\9\9\9\f\c\l\e\3\x\c\z\x\t\a\0\2\h\y\2\0\5\9\4\4\j\n\y\v\q\d\6\p\x\h\e\4\f\p\2\n\h\c\f\8\x\8\s\m\z\8\9\4\y\x\b\x\8\w\o\a\j\1\g\v\t\c\g\e\z\a\6\m\b\r\e\q\p\t\5\u\5\t\7\g\c\m\n\n\b\o\x\f\z\j\v\w\t\s\x\d\u\t\6\m\v\i\4\u\b\8\2\i\3\z\n\8\h\i\y\u\1\1\5\t\g\5\7\q\6\t\9\a\k\9\2\0\r\j\4\b\8\u\f\4\8\t\o\7\3\s\n\c\n\7\p\7\r\k\s\y\t\f\k\h\u\q\n\3 ]] 00:08:05.208 00:08:05.208 real 0m3.162s 00:08:05.208 user 0m1.472s 00:08:05.208 sys 0m0.725s 00:08:05.208 16:13:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.208 16:13:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:05.208 ************************************ 00:08:05.208 END TEST dd_flags_misc_forced_aio 00:08:05.208 ************************************ 00:08:05.208 16:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:05.208 16:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:05.208 16:13:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:05.208 ************************************ 00:08:05.208 END TEST spdk_dd_posix 00:08:05.208 ************************************ 00:08:05.208 00:08:05.208 real 0m15.235s 00:08:05.208 user 0m6.225s 00:08:05.208 sys 0m4.284s 00:08:05.208 16:13:30 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.208 16:13:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:05.208 16:13:30 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:05.208 16:13:30 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.208 16:13:30 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.208 16:13:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:05.208 ************************************ 00:08:05.208 START TEST spdk_dd_malloc 00:08:05.208 ************************************ 00:08:05.208 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:05.208 * Looking for test storage... 00:08:05.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:05.208 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.208 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.208 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.468 --rc genhtml_branch_coverage=1 00:08:05.468 --rc genhtml_function_coverage=1 00:08:05.468 --rc genhtml_legend=1 00:08:05.468 --rc geninfo_all_blocks=1 00:08:05.468 --rc geninfo_unexecuted_blocks=1 00:08:05.468 00:08:05.468 ' 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.468 --rc genhtml_branch_coverage=1 00:08:05.468 --rc genhtml_function_coverage=1 00:08:05.468 --rc genhtml_legend=1 00:08:05.468 --rc geninfo_all_blocks=1 00:08:05.468 --rc geninfo_unexecuted_blocks=1 00:08:05.468 00:08:05.468 ' 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.468 --rc genhtml_branch_coverage=1 00:08:05.468 --rc genhtml_function_coverage=1 00:08:05.468 --rc genhtml_legend=1 00:08:05.468 --rc geninfo_all_blocks=1 00:08:05.468 --rc geninfo_unexecuted_blocks=1 00:08:05.468 00:08:05.468 ' 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.468 --rc genhtml_branch_coverage=1 00:08:05.468 --rc genhtml_function_coverage=1 00:08:05.468 --rc genhtml_legend=1 00:08:05.468 --rc geninfo_all_blocks=1 00:08:05.468 --rc geninfo_unexecuted_blocks=1 00:08:05.468 00:08:05.468 ' 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.468 16:13:30 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:05.468 ************************************ 00:08:05.468 START TEST dd_malloc_copy 00:08:05.468 ************************************ 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:05.468 16:13:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:05.468 [2024-11-26 16:13:30.967528] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:05.468 [2024-11-26 16:13:30.967793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72711 ] 00:08:05.468 { 00:08:05.468 "subsystems": [ 00:08:05.468 { 00:08:05.468 "subsystem": "bdev", 00:08:05.468 "config": [ 00:08:05.468 { 00:08:05.468 "params": { 00:08:05.468 "block_size": 512, 00:08:05.468 "num_blocks": 1048576, 00:08:05.468 "name": "malloc0" 00:08:05.468 }, 00:08:05.468 "method": "bdev_malloc_create" 00:08:05.468 }, 00:08:05.468 { 00:08:05.468 "params": { 00:08:05.468 "block_size": 512, 00:08:05.468 "num_blocks": 1048576, 00:08:05.468 "name": "malloc1" 00:08:05.468 }, 00:08:05.468 "method": "bdev_malloc_create" 00:08:05.468 }, 00:08:05.468 { 00:08:05.468 "method": "bdev_wait_for_examine" 00:08:05.468 } 00:08:05.468 ] 00:08:05.468 } 00:08:05.468 ] 00:08:05.468 } 00:08:05.468 [2024-11-26 16:13:31.109198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.728 [2024-11-26 16:13:31.131729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.728 [2024-11-26 16:13:31.164843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.133  [2024-11-26T16:13:33.354Z] Copying: 238/512 [MB] (238 MBps) [2024-11-26T16:13:33.613Z] Copying: 460/512 [MB] (221 MBps) [2024-11-26T16:13:34.182Z] Copying: 512/512 [MB] (average 229 MBps) 00:08:08.529 00:08:08.529 16:13:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:08.529 16:13:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:08.529 16:13:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:08.529 16:13:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:08.529 [2024-11-26 16:13:33.933089] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:08.529 [2024-11-26 16:13:33.933184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72753 ] 00:08:08.529 { 00:08:08.529 "subsystems": [ 00:08:08.529 { 00:08:08.529 "subsystem": "bdev", 00:08:08.529 "config": [ 00:08:08.529 { 00:08:08.529 "params": { 00:08:08.529 "block_size": 512, 00:08:08.529 "num_blocks": 1048576, 00:08:08.529 "name": "malloc0" 00:08:08.529 }, 00:08:08.529 "method": "bdev_malloc_create" 00:08:08.529 }, 00:08:08.529 { 00:08:08.529 "params": { 00:08:08.529 "block_size": 512, 00:08:08.529 "num_blocks": 1048576, 00:08:08.529 "name": "malloc1" 00:08:08.529 }, 00:08:08.529 "method": "bdev_malloc_create" 00:08:08.529 }, 00:08:08.529 { 00:08:08.529 "method": "bdev_wait_for_examine" 00:08:08.529 } 00:08:08.529 ] 00:08:08.529 } 00:08:08.529 ] 00:08:08.529 } 00:08:08.529 [2024-11-26 16:13:34.079273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.529 [2024-11-26 16:13:34.103303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.529 [2024-11-26 16:13:34.137444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.908  [2024-11-26T16:13:36.498Z] Copying: 212/512 [MB] (212 MBps) [2024-11-26T16:13:36.758Z] Copying: 425/512 [MB] (212 MBps) [2024-11-26T16:13:37.327Z] Copying: 512/512 [MB] (average 216 MBps) 00:08:11.674 00:08:11.674 ************************************ 00:08:11.674 END TEST dd_malloc_copy 00:08:11.674 ************************************ 00:08:11.674 00:08:11.674 real 0m6.094s 00:08:11.674 user 0m5.469s 00:08:11.674 sys 0m0.479s 00:08:11.674 16:13:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.674 16:13:37 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:11.674 ************************************ 00:08:11.674 END TEST spdk_dd_malloc 00:08:11.674 ************************************ 00:08:11.674 00:08:11.674 real 0m6.324s 00:08:11.674 user 0m5.601s 00:08:11.674 sys 0m0.577s 00:08:11.674 16:13:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.674 16:13:37 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:11.674 16:13:37 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:11.674 16:13:37 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:11.674 16:13:37 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.674 16:13:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:11.674 ************************************ 00:08:11.674 START TEST spdk_dd_bdev_to_bdev 00:08:11.674 ************************************ 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:11.674 * Looking for test storage... 
00:08:11.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.674 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.675 --rc genhtml_branch_coverage=1 00:08:11.675 --rc genhtml_function_coverage=1 00:08:11.675 --rc genhtml_legend=1 00:08:11.675 --rc geninfo_all_blocks=1 00:08:11.675 --rc geninfo_unexecuted_blocks=1 00:08:11.675 00:08:11.675 ' 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.675 --rc genhtml_branch_coverage=1 00:08:11.675 --rc genhtml_function_coverage=1 00:08:11.675 --rc genhtml_legend=1 00:08:11.675 --rc geninfo_all_blocks=1 00:08:11.675 --rc geninfo_unexecuted_blocks=1 00:08:11.675 00:08:11.675 ' 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.675 --rc genhtml_branch_coverage=1 00:08:11.675 --rc genhtml_function_coverage=1 00:08:11.675 --rc genhtml_legend=1 00:08:11.675 --rc geninfo_all_blocks=1 00:08:11.675 --rc geninfo_unexecuted_blocks=1 00:08:11.675 00:08:11.675 ' 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.675 --rc genhtml_branch_coverage=1 00:08:11.675 --rc genhtml_function_coverage=1 00:08:11.675 --rc genhtml_legend=1 00:08:11.675 --rc geninfo_all_blocks=1 00:08:11.675 --rc geninfo_unexecuted_blocks=1 00:08:11.675 00:08:11.675 ' 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.675 16:13:37 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:11.675 ************************************ 00:08:11.675 START TEST dd_inflate_file 00:08:11.675 ************************************ 00:08:11.675 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:11.935 [2024-11-26 16:13:37.347533] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:11.935 [2024-11-26 16:13:37.347640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72860 ] 00:08:11.935 [2024-11-26 16:13:37.491565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.935 [2024-11-26 16:13:37.510386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.935 [2024-11-26 16:13:37.537085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.195  [2024-11-26T16:13:37.848Z] Copying: 64/64 [MB] (average 1560 MBps) 00:08:12.195 00:08:12.195 00:08:12.195 real 0m0.398s 00:08:12.195 user 0m0.203s 00:08:12.195 sys 0m0.216s 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:12.195 ************************************ 00:08:12.195 END TEST dd_inflate_file 00:08:12.195 ************************************ 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.195 ************************************ 00:08:12.195 START TEST dd_copy_to_out_bdev 00:08:12.195 ************************************ 00:08:12.195 16:13:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:12.195 [2024-11-26 16:13:37.798443] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:12.195 [2024-11-26 16:13:37.798509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72896 ] 00:08:12.195 { 00:08:12.195 "subsystems": [ 00:08:12.195 { 00:08:12.195 "subsystem": "bdev", 00:08:12.195 "config": [ 00:08:12.195 { 00:08:12.195 "params": { 00:08:12.195 "trtype": "pcie", 00:08:12.195 "traddr": "0000:00:10.0", 00:08:12.195 "name": "Nvme0" 00:08:12.195 }, 00:08:12.195 "method": "bdev_nvme_attach_controller" 00:08:12.195 }, 00:08:12.195 { 00:08:12.195 "params": { 00:08:12.195 "trtype": "pcie", 00:08:12.195 "traddr": "0000:00:11.0", 00:08:12.195 "name": "Nvme1" 00:08:12.195 }, 00:08:12.195 "method": "bdev_nvme_attach_controller" 00:08:12.195 }, 00:08:12.195 { 00:08:12.195 "method": "bdev_wait_for_examine" 00:08:12.195 } 00:08:12.195 ] 00:08:12.195 } 00:08:12.195 ] 00:08:12.195 } 00:08:12.454 [2024-11-26 16:13:37.936099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.454 [2024-11-26 16:13:37.954201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.454 [2024-11-26 16:13:37.981114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.831  [2024-11-26T16:13:39.484Z] Copying: 52/64 [MB] (52 MBps) [2024-11-26T16:13:39.743Z] Copying: 64/64 [MB] (average 51 MBps) 00:08:14.090 00:08:14.090 00:08:14.090 real 0m1.772s 00:08:14.090 user 0m1.611s 00:08:14.090 sys 0m1.459s 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.090 ************************************ 00:08:14.090 END TEST dd_copy_to_out_bdev 00:08:14.090 ************************************ 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.090 ************************************ 00:08:14.090 START TEST dd_offset_magic 00:08:14.090 ************************************ 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:14.090 16:13:39 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:14.090 [2024-11-26 16:13:39.642057] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:14.090 [2024-11-26 16:13:39.642164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72941 ] 00:08:14.090 { 00:08:14.090 "subsystems": [ 00:08:14.090 { 00:08:14.090 "subsystem": "bdev", 00:08:14.090 "config": [ 00:08:14.090 { 00:08:14.090 "params": { 00:08:14.090 "trtype": "pcie", 00:08:14.090 "traddr": "0000:00:10.0", 00:08:14.090 "name": "Nvme0" 00:08:14.090 }, 00:08:14.090 "method": "bdev_nvme_attach_controller" 00:08:14.090 }, 00:08:14.090 { 00:08:14.091 "params": { 00:08:14.091 "trtype": "pcie", 00:08:14.091 "traddr": "0000:00:11.0", 00:08:14.091 "name": "Nvme1" 00:08:14.091 }, 00:08:14.091 "method": "bdev_nvme_attach_controller" 00:08:14.091 }, 00:08:14.091 { 00:08:14.091 "method": "bdev_wait_for_examine" 00:08:14.091 } 00:08:14.091 ] 00:08:14.091 } 00:08:14.091 ] 00:08:14.091 } 00:08:14.350 [2024-11-26 16:13:39.782888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.351 [2024-11-26 16:13:39.801433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.351 [2024-11-26 16:13:39.828893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.609  [2024-11-26T16:13:40.262Z] Copying: 65/65 [MB] (average 1031 MBps) 00:08:14.609 00:08:14.609 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:14.609 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:14.609 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:14.609 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:14.609 [2024-11-26 16:13:40.242266] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:14.609 [2024-11-26 16:13:40.242392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72961 ] 00:08:14.609 { 00:08:14.609 "subsystems": [ 00:08:14.609 { 00:08:14.609 "subsystem": "bdev", 00:08:14.610 "config": [ 00:08:14.610 { 00:08:14.610 "params": { 00:08:14.610 "trtype": "pcie", 00:08:14.610 "traddr": "0000:00:10.0", 00:08:14.610 "name": "Nvme0" 00:08:14.610 }, 00:08:14.610 "method": "bdev_nvme_attach_controller" 00:08:14.610 }, 00:08:14.610 { 00:08:14.610 "params": { 00:08:14.610 "trtype": "pcie", 00:08:14.610 "traddr": "0000:00:11.0", 00:08:14.610 "name": "Nvme1" 00:08:14.610 }, 00:08:14.610 "method": "bdev_nvme_attach_controller" 00:08:14.610 }, 00:08:14.610 { 00:08:14.610 "method": "bdev_wait_for_examine" 00:08:14.610 } 00:08:14.610 ] 00:08:14.610 } 00:08:14.610 ] 00:08:14.610 } 00:08:14.868 [2024-11-26 16:13:40.387696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.868 [2024-11-26 16:13:40.405257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.868 [2024-11-26 16:13:40.432321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.127  [2024-11-26T16:13:40.780Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:15.127 00:08:15.127 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:15.127 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:15.127 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:15.127 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:15.127 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:15.127 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:15.127 16:13:40 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:15.127 [2024-11-26 16:13:40.761988] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:15.127 [2024-11-26 16:13:40.762243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72972 ] 00:08:15.127 { 00:08:15.127 "subsystems": [ 00:08:15.127 { 00:08:15.127 "subsystem": "bdev", 00:08:15.127 "config": [ 00:08:15.127 { 00:08:15.127 "params": { 00:08:15.127 "trtype": "pcie", 00:08:15.127 "traddr": "0000:00:10.0", 00:08:15.127 "name": "Nvme0" 00:08:15.127 }, 00:08:15.127 "method": "bdev_nvme_attach_controller" 00:08:15.127 }, 00:08:15.127 { 00:08:15.127 "params": { 00:08:15.127 "trtype": "pcie", 00:08:15.127 "traddr": "0000:00:11.0", 00:08:15.127 "name": "Nvme1" 00:08:15.127 }, 00:08:15.127 "method": "bdev_nvme_attach_controller" 00:08:15.127 }, 00:08:15.127 { 00:08:15.127 "method": "bdev_wait_for_examine" 00:08:15.127 } 00:08:15.127 ] 00:08:15.127 } 00:08:15.127 ] 00:08:15.127 } 00:08:15.387 [2024-11-26 16:13:40.910195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.387 [2024-11-26 16:13:40.929090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.387 [2024-11-26 16:13:40.955966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.647  [2024-11-26T16:13:41.560Z] Copying: 65/65 [MB] (average 1120 MBps) 00:08:15.907 00:08:15.907 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:15.907 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:15.907 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:15.907 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:15.907 [2024-11-26 16:13:41.381051] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:15.907 [2024-11-26 16:13:41.381293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72992 ] 00:08:15.907 { 00:08:15.907 "subsystems": [ 00:08:15.907 { 00:08:15.907 "subsystem": "bdev", 00:08:15.907 "config": [ 00:08:15.907 { 00:08:15.907 "params": { 00:08:15.907 "trtype": "pcie", 00:08:15.907 "traddr": "0000:00:10.0", 00:08:15.907 "name": "Nvme0" 00:08:15.907 }, 00:08:15.907 "method": "bdev_nvme_attach_controller" 00:08:15.907 }, 00:08:15.907 { 00:08:15.907 "params": { 00:08:15.907 "trtype": "pcie", 00:08:15.907 "traddr": "0000:00:11.0", 00:08:15.907 "name": "Nvme1" 00:08:15.907 }, 00:08:15.907 "method": "bdev_nvme_attach_controller" 00:08:15.907 }, 00:08:15.907 { 00:08:15.907 "method": "bdev_wait_for_examine" 00:08:15.907 } 00:08:15.907 ] 00:08:15.907 } 00:08:15.907 ] 00:08:15.907 } 00:08:15.907 [2024-11-26 16:13:41.525616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.907 [2024-11-26 16:13:41.543832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.166 [2024-11-26 16:13:41.571843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.166  [2024-11-26T16:13:42.078Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:16.425 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:16.425 00:08:16.425 real 0m2.257s 00:08:16.425 user 0m1.658s 00:08:16.425 sys 0m0.599s 00:08:16.425 ************************************ 00:08:16.425 END TEST dd_offset_magic 00:08:16.425 ************************************ 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:16.425 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:16.426 16:13:41 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:16.426 { 00:08:16.426 "subsystems": [ 00:08:16.426 { 00:08:16.426 "subsystem": "bdev", 00:08:16.426 "config": [ 00:08:16.426 { 00:08:16.426 "params": { 00:08:16.426 "trtype": "pcie", 00:08:16.426 "traddr": 
"0000:00:10.0", 00:08:16.426 "name": "Nvme0" 00:08:16.426 }, 00:08:16.426 "method": "bdev_nvme_attach_controller" 00:08:16.426 }, 00:08:16.426 { 00:08:16.426 "params": { 00:08:16.426 "trtype": "pcie", 00:08:16.426 "traddr": "0000:00:11.0", 00:08:16.426 "name": "Nvme1" 00:08:16.426 }, 00:08:16.426 "method": "bdev_nvme_attach_controller" 00:08:16.426 }, 00:08:16.426 { 00:08:16.426 "method": "bdev_wait_for_examine" 00:08:16.426 } 00:08:16.426 ] 00:08:16.426 } 00:08:16.426 ] 00:08:16.426 } 00:08:16.426 [2024-11-26 16:13:41.935417] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:16.426 [2024-11-26 16:13:41.935511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73024 ] 00:08:16.685 [2024-11-26 16:13:42.080517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.685 [2024-11-26 16:13:42.100830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.685 [2024-11-26 16:13:42.131567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.685  [2024-11-26T16:13:42.596Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:16.943 00:08:16.943 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:16.943 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:16.943 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:16.943 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:16.944 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:16.944 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:16.944 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:16.944 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:16.944 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:16.944 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:16.944 [2024-11-26 16:13:42.442455] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:16.944 [2024-11-26 16:13:42.442681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73039 ] 00:08:16.944 { 00:08:16.944 "subsystems": [ 00:08:16.944 { 00:08:16.944 "subsystem": "bdev", 00:08:16.944 "config": [ 00:08:16.944 { 00:08:16.944 "params": { 00:08:16.944 "trtype": "pcie", 00:08:16.944 "traddr": "0000:00:10.0", 00:08:16.944 "name": "Nvme0" 00:08:16.944 }, 00:08:16.944 "method": "bdev_nvme_attach_controller" 00:08:16.944 }, 00:08:16.944 { 00:08:16.944 "params": { 00:08:16.944 "trtype": "pcie", 00:08:16.944 "traddr": "0000:00:11.0", 00:08:16.944 "name": "Nvme1" 00:08:16.944 }, 00:08:16.944 "method": "bdev_nvme_attach_controller" 00:08:16.944 }, 00:08:16.944 { 00:08:16.944 "method": "bdev_wait_for_examine" 00:08:16.944 } 00:08:16.944 ] 00:08:16.944 } 00:08:16.944 ] 00:08:16.944 } 00:08:16.944 [2024-11-26 16:13:42.579108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.203 [2024-11-26 16:13:42.597640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.203 [2024-11-26 16:13:42.624304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.203  [2024-11-26T16:13:43.115Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:17.462 00:08:17.462 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:17.462 ************************************ 00:08:17.462 END TEST spdk_dd_bdev_to_bdev 00:08:17.462 ************************************ 00:08:17.462 00:08:17.462 real 0m5.806s 00:08:17.462 user 0m4.342s 00:08:17.462 sys 0m2.801s 00:08:17.462 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.462 16:13:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:17.462 16:13:42 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:17.462 16:13:42 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:17.462 16:13:42 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.462 16:13:42 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.462 16:13:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:17.462 ************************************ 00:08:17.462 START TEST spdk_dd_uring 00:08:17.462 ************************************ 00:08:17.462 16:13:42 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:17.462 * Looking for test storage... 
00:08:17.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:17.462 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.462 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.462 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.722 --rc genhtml_branch_coverage=1 00:08:17.722 --rc genhtml_function_coverage=1 00:08:17.722 --rc genhtml_legend=1 00:08:17.722 --rc geninfo_all_blocks=1 00:08:17.722 --rc geninfo_unexecuted_blocks=1 00:08:17.722 00:08:17.722 ' 00:08:17.722 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.723 --rc genhtml_branch_coverage=1 00:08:17.723 --rc genhtml_function_coverage=1 00:08:17.723 --rc genhtml_legend=1 00:08:17.723 --rc geninfo_all_blocks=1 00:08:17.723 --rc geninfo_unexecuted_blocks=1 00:08:17.723 00:08:17.723 ' 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.723 --rc genhtml_branch_coverage=1 00:08:17.723 --rc genhtml_function_coverage=1 00:08:17.723 --rc genhtml_legend=1 00:08:17.723 --rc geninfo_all_blocks=1 00:08:17.723 --rc geninfo_unexecuted_blocks=1 00:08:17.723 00:08:17.723 ' 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.723 --rc genhtml_branch_coverage=1 00:08:17.723 --rc genhtml_function_coverage=1 00:08:17.723 --rc genhtml_legend=1 00:08:17.723 --rc geninfo_all_blocks=1 00:08:17.723 --rc geninfo_unexecuted_blocks=1 00:08:17.723 00:08:17.723 ' 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:17.723 ************************************ 00:08:17.723 START TEST dd_uring_copy 00:08:17.723 ************************************ 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:17.723 
16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=knu1c97z2eml98qeyzf6bvj9tgbv0k2lnw8qy9y71p4l45z4fg6hey0r05kagll1usetxf0cqozwgtmnjivj4o8mpoq8e96zycv856txd3nlj2jd6u98mcw0tdmht133pmbcrmfrqdw7g3rtdblaqn2iccb9s8gccug229tz7hfpdi9yr12lhtbq2hiepca8rw3n2z36tjg3kpf6accsy4o6xbb4xqgjob5m8vfjqu05thnxm74t297undkag1pps0q6vsyfhf70arcdvoqrf17odod7wgitw370iy7yfthg3dy9m84nbzbxdv2gdaick8lhqex6aaleovxjhz14ym89qao2b0reymahajattj3idb2t85paq5awjo2vwr3xc1vh1jeobm7o1czx3so6j77nvoulg6obfnyni1rl9o0x679rl1efy1bsvox5kbe0id2opqk7h9zfpwpfh9489xf2p14fotvuk9g5oeobhx2vi0pix1id026bbzuj934okj6rikj8ho1j02fdutzrpss9pcl30l5sivw3tjbhdye7tbpsz3diplks8si47nef4rqkaw7jfvxjycruxv4u7sjygjnar42ss9q3qg5ja7jjqfwjow4z4ua4ol0kc63xyeqdyak2ty5xr4x5r83xp05vk7dxdntoeao6w0robft77mxdgbb27kpa7bniqn7vjs2srhieq6ma1niy6o4sipd5nw4w9ilw4uxjmi8dv52yzig3m1ee6pxxlrren1q6w4j8yhbdvy0qrhjb06uad6r661r35gqtktp1q12uw7m5pumxtap3u27wcskeru3ypi4jmfofpk6a8o2jkc80j6akibv1n2q7x8066wle5bh8p4rqv36pfidl0il6mal1j7ssdjabll8fduivu3a6wi515rqyv7nlbtab037g0r8or0fp2xlcr1za0bicun3b8ft1guo988bka62lzz0a1hfy91nxivwpqjl04mklf34h14mimd2ry4g0nk6nn6ti 00:08:17.723 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
knu1c97z2eml98qeyzf6bvj9tgbv0k2lnw8qy9y71p4l45z4fg6hey0r05kagll1usetxf0cqozwgtmnjivj4o8mpoq8e96zycv856txd3nlj2jd6u98mcw0tdmht133pmbcrmfrqdw7g3rtdblaqn2iccb9s8gccug229tz7hfpdi9yr12lhtbq2hiepca8rw3n2z36tjg3kpf6accsy4o6xbb4xqgjob5m8vfjqu05thnxm74t297undkag1pps0q6vsyfhf70arcdvoqrf17odod7wgitw370iy7yfthg3dy9m84nbzbxdv2gdaick8lhqex6aaleovxjhz14ym89qao2b0reymahajattj3idb2t85paq5awjo2vwr3xc1vh1jeobm7o1czx3so6j77nvoulg6obfnyni1rl9o0x679rl1efy1bsvox5kbe0id2opqk7h9zfpwpfh9489xf2p14fotvuk9g5oeobhx2vi0pix1id026bbzuj934okj6rikj8ho1j02fdutzrpss9pcl30l5sivw3tjbhdye7tbpsz3diplks8si47nef4rqkaw7jfvxjycruxv4u7sjygjnar42ss9q3qg5ja7jjqfwjow4z4ua4ol0kc63xyeqdyak2ty5xr4x5r83xp05vk7dxdntoeao6w0robft77mxdgbb27kpa7bniqn7vjs2srhieq6ma1niy6o4sipd5nw4w9ilw4uxjmi8dv52yzig3m1ee6pxxlrren1q6w4j8yhbdvy0qrhjb06uad6r661r35gqtktp1q12uw7m5pumxtap3u27wcskeru3ypi4jmfofpk6a8o2jkc80j6akibv1n2q7x8066wle5bh8p4rqv36pfidl0il6mal1j7ssdjabll8fduivu3a6wi515rqyv7nlbtab037g0r8or0fp2xlcr1za0bicun3b8ft1guo988bka62lzz0a1hfy91nxivwpqjl04mklf34h14mimd2ry4g0nk6nn6ti 00:08:17.724 16:13:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:17.724 [2024-11-26 16:13:43.247929] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:17.724 [2024-11-26 16:13:43.248190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73117 ] 00:08:17.983 [2024-11-26 16:13:43.396048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.983 [2024-11-26 16:13:43.417094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.983 [2024-11-26 16:13:43.444333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.243  [2024-11-26T16:13:44.155Z] Copying: 511/511 [MB] (average 1646 MBps) 00:08:18.502 00:08:18.502 16:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:18.502 16:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:18.503 16:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:18.503 16:13:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:18.503 { 00:08:18.503 "subsystems": [ 00:08:18.503 { 00:08:18.503 "subsystem": "bdev", 00:08:18.503 "config": [ 00:08:18.503 { 00:08:18.503 "params": { 00:08:18.503 "block_size": 512, 00:08:18.503 "num_blocks": 1048576, 00:08:18.503 "name": "malloc0" 00:08:18.503 }, 00:08:18.503 "method": "bdev_malloc_create" 00:08:18.503 }, 00:08:18.503 { 00:08:18.503 "params": { 00:08:18.503 "filename": "/dev/zram1", 00:08:18.503 "name": "uring0" 00:08:18.503 }, 00:08:18.503 "method": "bdev_uring_create" 00:08:18.503 }, 00:08:18.503 { 00:08:18.503 "method": "bdev_wait_for_examine" 00:08:18.503 } 00:08:18.503 ] 00:08:18.503 } 00:08:18.503 ] 00:08:18.503 } 00:08:18.503 [2024-11-26 16:13:44.126302] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:18.503 [2024-11-26 16:13:44.126420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73128 ] 00:08:18.762 [2024-11-26 16:13:44.268644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.762 [2024-11-26 16:13:44.286260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.762 [2024-11-26 16:13:44.313113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.139  [2024-11-26T16:13:46.729Z] Copying: 252/512 [MB] (252 MBps) [2024-11-26T16:13:46.729Z] Copying: 499/512 [MB] (247 MBps) [2024-11-26T16:13:46.729Z] Copying: 512/512 [MB] (average 249 MBps) 00:08:21.076 00:08:21.076 16:13:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:21.076 16:13:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:21.076 16:13:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:21.076 16:13:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:21.076 [2024-11-26 16:13:46.714927] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:21.076 [2024-11-26 16:13:46.715013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73166 ] 00:08:21.335 { 00:08:21.335 "subsystems": [ 00:08:21.335 { 00:08:21.335 "subsystem": "bdev", 00:08:21.335 "config": [ 00:08:21.335 { 00:08:21.335 "params": { 00:08:21.335 "block_size": 512, 00:08:21.335 "num_blocks": 1048576, 00:08:21.335 "name": "malloc0" 00:08:21.335 }, 00:08:21.335 "method": "bdev_malloc_create" 00:08:21.335 }, 00:08:21.335 { 00:08:21.335 "params": { 00:08:21.335 "filename": "/dev/zram1", 00:08:21.335 "name": "uring0" 00:08:21.335 }, 00:08:21.335 "method": "bdev_uring_create" 00:08:21.335 }, 00:08:21.335 { 00:08:21.335 "method": "bdev_wait_for_examine" 00:08:21.335 } 00:08:21.335 ] 00:08:21.335 } 00:08:21.335 ] 00:08:21.335 } 00:08:21.335 [2024-11-26 16:13:46.852076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.335 [2024-11-26 16:13:46.871796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.335 [2024-11-26 16:13:46.902872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.712  [2024-11-26T16:13:49.300Z] Copying: 178/512 [MB] (178 MBps) [2024-11-26T16:13:50.236Z] Copying: 364/512 [MB] (186 MBps) [2024-11-26T16:13:50.236Z] Copying: 512/512 [MB] (average 174 MBps) 00:08:24.583 00:08:24.583 16:13:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:24.584 16:13:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
knu1c97z2eml98qeyzf6bvj9tgbv0k2lnw8qy9y71p4l45z4fg6hey0r05kagll1usetxf0cqozwgtmnjivj4o8mpoq8e96zycv856txd3nlj2jd6u98mcw0tdmht133pmbcrmfrqdw7g3rtdblaqn2iccb9s8gccug229tz7hfpdi9yr12lhtbq2hiepca8rw3n2z36tjg3kpf6accsy4o6xbb4xqgjob5m8vfjqu05thnxm74t297undkag1pps0q6vsyfhf70arcdvoqrf17odod7wgitw370iy7yfthg3dy9m84nbzbxdv2gdaick8lhqex6aaleovxjhz14ym89qao2b0reymahajattj3idb2t85paq5awjo2vwr3xc1vh1jeobm7o1czx3so6j77nvoulg6obfnyni1rl9o0x679rl1efy1bsvox5kbe0id2opqk7h9zfpwpfh9489xf2p14fotvuk9g5oeobhx2vi0pix1id026bbzuj934okj6rikj8ho1j02fdutzrpss9pcl30l5sivw3tjbhdye7tbpsz3diplks8si47nef4rqkaw7jfvxjycruxv4u7sjygjnar42ss9q3qg5ja7jjqfwjow4z4ua4ol0kc63xyeqdyak2ty5xr4x5r83xp05vk7dxdntoeao6w0robft77mxdgbb27kpa7bniqn7vjs2srhieq6ma1niy6o4sipd5nw4w9ilw4uxjmi8dv52yzig3m1ee6pxxlrren1q6w4j8yhbdvy0qrhjb06uad6r661r35gqtktp1q12uw7m5pumxtap3u27wcskeru3ypi4jmfofpk6a8o2jkc80j6akibv1n2q7x8066wle5bh8p4rqv36pfidl0il6mal1j7ssdjabll8fduivu3a6wi515rqyv7nlbtab037g0r8or0fp2xlcr1za0bicun3b8ft1guo988bka62lzz0a1hfy91nxivwpqjl04mklf34h14mimd2ry4g0nk6nn6ti == \k\n\u\1\c\9\7\z\2\e\m\l\9\8\q\e\y\z\f\6\b\v\j\9\t\g\b\v\0\k\2\l\n\w\8\q\y\9\y\7\1\p\4\l\4\5\z\4\f\g\6\h\e\y\0\r\0\5\k\a\g\l\l\1\u\s\e\t\x\f\0\c\q\o\z\w\g\t\m\n\j\i\v\j\4\o\8\m\p\o\q\8\e\9\6\z\y\c\v\8\5\6\t\x\d\3\n\l\j\2\j\d\6\u\9\8\m\c\w\0\t\d\m\h\t\1\3\3\p\m\b\c\r\m\f\r\q\d\w\7\g\3\r\t\d\b\l\a\q\n\2\i\c\c\b\9\s\8\g\c\c\u\g\2\2\9\t\z\7\h\f\p\d\i\9\y\r\1\2\l\h\t\b\q\2\h\i\e\p\c\a\8\r\w\3\n\2\z\3\6\t\j\g\3\k\p\f\6\a\c\c\s\y\4\o\6\x\b\b\4\x\q\g\j\o\b\5\m\8\v\f\j\q\u\0\5\t\h\n\x\m\7\4\t\2\9\7\u\n\d\k\a\g\1\p\p\s\0\q\6\v\s\y\f\h\f\7\0\a\r\c\d\v\o\q\r\f\1\7\o\d\o\d\7\w\g\i\t\w\3\7\0\i\y\7\y\f\t\h\g\3\d\y\9\m\8\4\n\b\z\b\x\d\v\2\g\d\a\i\c\k\8\l\h\q\e\x\6\a\a\l\e\o\v\x\j\h\z\1\4\y\m\8\9\q\a\o\2\b\0\r\e\y\m\a\h\a\j\a\t\t\j\3\i\d\b\2\t\8\5\p\a\q\5\a\w\j\o\2\v\w\r\3\x\c\1\v\h\1\j\e\o\b\m\7\o\1\c\z\x\3\s\o\6\j\7\7\n\v\o\u\l\g\6\o\b\f\n\y\n\i\1\r\l\9\o\0\x\6\7\9\r\l\1\e\f\y\1\b\s\v\o\x\5\k\b\e\0\i\d\2\o\p\q\k\7\h\9\z\f\p\w\p\f\h\9\4\8\9\x\f\2\p\1\4\f\o\t\v\u\k\9\g\5\o\e\o\b\h\x\2\v\i\0\p\i\x\1\i\d\0\2\6\b\b\z\u\j\9\3\4\o\k\j\6\r\i\k\j\8\h\o\1\j\0\2\f\d\u\t\z\r\p\s\s\9\p\c\l\3\0\l\5\s\i\v\w\3\t\j\b\h\d\y\e\7\t\b\p\s\z\3\d\i\p\l\k\s\8\s\i\4\7\n\e\f\4\r\q\k\a\w\7\j\f\v\x\j\y\c\r\u\x\v\4\u\7\s\j\y\g\j\n\a\r\4\2\s\s\9\q\3\q\g\5\j\a\7\j\j\q\f\w\j\o\w\4\z\4\u\a\4\o\l\0\k\c\6\3\x\y\e\q\d\y\a\k\2\t\y\5\x\r\4\x\5\r\8\3\x\p\0\5\v\k\7\d\x\d\n\t\o\e\a\o\6\w\0\r\o\b\f\t\7\7\m\x\d\g\b\b\2\7\k\p\a\7\b\n\i\q\n\7\v\j\s\2\s\r\h\i\e\q\6\m\a\1\n\i\y\6\o\4\s\i\p\d\5\n\w\4\w\9\i\l\w\4\u\x\j\m\i\8\d\v\5\2\y\z\i\g\3\m\1\e\e\6\p\x\x\l\r\r\e\n\1\q\6\w\4\j\8\y\h\b\d\v\y\0\q\r\h\j\b\0\6\u\a\d\6\r\6\6\1\r\3\5\g\q\t\k\t\p\1\q\1\2\u\w\7\m\5\p\u\m\x\t\a\p\3\u\2\7\w\c\s\k\e\r\u\3\y\p\i\4\j\m\f\o\f\p\k\6\a\8\o\2\j\k\c\8\0\j\6\a\k\i\b\v\1\n\2\q\7\x\8\0\6\6\w\l\e\5\b\h\8\p\4\r\q\v\3\6\p\f\i\d\l\0\i\l\6\m\a\l\1\j\7\s\s\d\j\a\b\l\l\8\f\d\u\i\v\u\3\a\6\w\i\5\1\5\r\q\y\v\7\n\l\b\t\a\b\0\3\7\g\0\r\8\o\r\0\f\p\2\x\l\c\r\1\z\a\0\b\i\c\u\n\3\b\8\f\t\1\g\u\o\9\8\8\b\k\a\6\2\l\z\z\0\a\1\h\f\y\9\1\n\x\i\v\w\p\q\j\l\0\4\m\k\l\f\3\4\h\1\4\m\i\m\d\2\r\y\4\g\0\n\k\6\n\n\6\t\i ]] 00:08:24.584 16:13:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:24.584 16:13:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 
knu1c97z2eml98qeyzf6bvj9tgbv0k2lnw8qy9y71p4l45z4fg6hey0r05kagll1usetxf0cqozwgtmnjivj4o8mpoq8e96zycv856txd3nlj2jd6u98mcw0tdmht133pmbcrmfrqdw7g3rtdblaqn2iccb9s8gccug229tz7hfpdi9yr12lhtbq2hiepca8rw3n2z36tjg3kpf6accsy4o6xbb4xqgjob5m8vfjqu05thnxm74t297undkag1pps0q6vsyfhf70arcdvoqrf17odod7wgitw370iy7yfthg3dy9m84nbzbxdv2gdaick8lhqex6aaleovxjhz14ym89qao2b0reymahajattj3idb2t85paq5awjo2vwr3xc1vh1jeobm7o1czx3so6j77nvoulg6obfnyni1rl9o0x679rl1efy1bsvox5kbe0id2opqk7h9zfpwpfh9489xf2p14fotvuk9g5oeobhx2vi0pix1id026bbzuj934okj6rikj8ho1j02fdutzrpss9pcl30l5sivw3tjbhdye7tbpsz3diplks8si47nef4rqkaw7jfvxjycruxv4u7sjygjnar42ss9q3qg5ja7jjqfwjow4z4ua4ol0kc63xyeqdyak2ty5xr4x5r83xp05vk7dxdntoeao6w0robft77mxdgbb27kpa7bniqn7vjs2srhieq6ma1niy6o4sipd5nw4w9ilw4uxjmi8dv52yzig3m1ee6pxxlrren1q6w4j8yhbdvy0qrhjb06uad6r661r35gqtktp1q12uw7m5pumxtap3u27wcskeru3ypi4jmfofpk6a8o2jkc80j6akibv1n2q7x8066wle5bh8p4rqv36pfidl0il6mal1j7ssdjabll8fduivu3a6wi515rqyv7nlbtab037g0r8or0fp2xlcr1za0bicun3b8ft1guo988bka62lzz0a1hfy91nxivwpqjl04mklf34h14mimd2ry4g0nk6nn6ti == \k\n\u\1\c\9\7\z\2\e\m\l\9\8\q\e\y\z\f\6\b\v\j\9\t\g\b\v\0\k\2\l\n\w\8\q\y\9\y\7\1\p\4\l\4\5\z\4\f\g\6\h\e\y\0\r\0\5\k\a\g\l\l\1\u\s\e\t\x\f\0\c\q\o\z\w\g\t\m\n\j\i\v\j\4\o\8\m\p\o\q\8\e\9\6\z\y\c\v\8\5\6\t\x\d\3\n\l\j\2\j\d\6\u\9\8\m\c\w\0\t\d\m\h\t\1\3\3\p\m\b\c\r\m\f\r\q\d\w\7\g\3\r\t\d\b\l\a\q\n\2\i\c\c\b\9\s\8\g\c\c\u\g\2\2\9\t\z\7\h\f\p\d\i\9\y\r\1\2\l\h\t\b\q\2\h\i\e\p\c\a\8\r\w\3\n\2\z\3\6\t\j\g\3\k\p\f\6\a\c\c\s\y\4\o\6\x\b\b\4\x\q\g\j\o\b\5\m\8\v\f\j\q\u\0\5\t\h\n\x\m\7\4\t\2\9\7\u\n\d\k\a\g\1\p\p\s\0\q\6\v\s\y\f\h\f\7\0\a\r\c\d\v\o\q\r\f\1\7\o\d\o\d\7\w\g\i\t\w\3\7\0\i\y\7\y\f\t\h\g\3\d\y\9\m\8\4\n\b\z\b\x\d\v\2\g\d\a\i\c\k\8\l\h\q\e\x\6\a\a\l\e\o\v\x\j\h\z\1\4\y\m\8\9\q\a\o\2\b\0\r\e\y\m\a\h\a\j\a\t\t\j\3\i\d\b\2\t\8\5\p\a\q\5\a\w\j\o\2\v\w\r\3\x\c\1\v\h\1\j\e\o\b\m\7\o\1\c\z\x\3\s\o\6\j\7\7\n\v\o\u\l\g\6\o\b\f\n\y\n\i\1\r\l\9\o\0\x\6\7\9\r\l\1\e\f\y\1\b\s\v\o\x\5\k\b\e\0\i\d\2\o\p\q\k\7\h\9\z\f\p\w\p\f\h\9\4\8\9\x\f\2\p\1\4\f\o\t\v\u\k\9\g\5\o\e\o\b\h\x\2\v\i\0\p\i\x\1\i\d\0\2\6\b\b\z\u\j\9\3\4\o\k\j\6\r\i\k\j\8\h\o\1\j\0\2\f\d\u\t\z\r\p\s\s\9\p\c\l\3\0\l\5\s\i\v\w\3\t\j\b\h\d\y\e\7\t\b\p\s\z\3\d\i\p\l\k\s\8\s\i\4\7\n\e\f\4\r\q\k\a\w\7\j\f\v\x\j\y\c\r\u\x\v\4\u\7\s\j\y\g\j\n\a\r\4\2\s\s\9\q\3\q\g\5\j\a\7\j\j\q\f\w\j\o\w\4\z\4\u\a\4\o\l\0\k\c\6\3\x\y\e\q\d\y\a\k\2\t\y\5\x\r\4\x\5\r\8\3\x\p\0\5\v\k\7\d\x\d\n\t\o\e\a\o\6\w\0\r\o\b\f\t\7\7\m\x\d\g\b\b\2\7\k\p\a\7\b\n\i\q\n\7\v\j\s\2\s\r\h\i\e\q\6\m\a\1\n\i\y\6\o\4\s\i\p\d\5\n\w\4\w\9\i\l\w\4\u\x\j\m\i\8\d\v\5\2\y\z\i\g\3\m\1\e\e\6\p\x\x\l\r\r\e\n\1\q\6\w\4\j\8\y\h\b\d\v\y\0\q\r\h\j\b\0\6\u\a\d\6\r\6\6\1\r\3\5\g\q\t\k\t\p\1\q\1\2\u\w\7\m\5\p\u\m\x\t\a\p\3\u\2\7\w\c\s\k\e\r\u\3\y\p\i\4\j\m\f\o\f\p\k\6\a\8\o\2\j\k\c\8\0\j\6\a\k\i\b\v\1\n\2\q\7\x\8\0\6\6\w\l\e\5\b\h\8\p\4\r\q\v\3\6\p\f\i\d\l\0\i\l\6\m\a\l\1\j\7\s\s\d\j\a\b\l\l\8\f\d\u\i\v\u\3\a\6\w\i\5\1\5\r\q\y\v\7\n\l\b\t\a\b\0\3\7\g\0\r\8\o\r\0\f\p\2\x\l\c\r\1\z\a\0\b\i\c\u\n\3\b\8\f\t\1\g\u\o\9\8\8\b\k\a\6\2\l\z\z\0\a\1\h\f\y\9\1\n\x\i\v\w\p\q\j\l\0\4\m\k\l\f\3\4\h\1\4\m\i\m\d\2\r\y\4\g\0\n\k\6\n\n\6\t\i ]] 00:08:24.584 16:13:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:25.152 16:13:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:25.152 16:13:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:25.152 16:13:50 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:25.152 16:13:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:25.152 [2024-11-26 16:13:50.554350] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:25.152 [2024-11-26 16:13:50.554455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73227 ] 00:08:25.152 { 00:08:25.152 "subsystems": [ 00:08:25.152 { 00:08:25.152 "subsystem": "bdev", 00:08:25.152 "config": [ 00:08:25.152 { 00:08:25.152 "params": { 00:08:25.152 "block_size": 512, 00:08:25.152 "num_blocks": 1048576, 00:08:25.152 "name": "malloc0" 00:08:25.152 }, 00:08:25.152 "method": "bdev_malloc_create" 00:08:25.152 }, 00:08:25.152 { 00:08:25.152 "params": { 00:08:25.152 "filename": "/dev/zram1", 00:08:25.152 "name": "uring0" 00:08:25.152 }, 00:08:25.152 "method": "bdev_uring_create" 00:08:25.152 }, 00:08:25.152 { 00:08:25.152 "method": "bdev_wait_for_examine" 00:08:25.152 } 00:08:25.152 ] 00:08:25.152 } 00:08:25.152 ] 00:08:25.152 } 00:08:25.152 [2024-11-26 16:13:50.684817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.152 [2024-11-26 16:13:50.702515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.152 [2024-11-26 16:13:50.730598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.604  [2024-11-26T16:13:53.194Z] Copying: 177/512 [MB] (177 MBps) [2024-11-26T16:13:53.763Z] Copying: 358/512 [MB] (181 MBps) [2024-11-26T16:13:54.023Z] Copying: 512/512 [MB] (average 179 MBps) 00:08:28.370 00:08:28.370 16:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:28.370 16:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:28.370 16:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:28.370 16:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:28.370 16:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:28.370 16:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:28.370 16:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:28.370 16:13:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:28.370 [2024-11-26 16:13:53.949111] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:28.370 [2024-11-26 16:13:53.949869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73273 ] 00:08:28.370 { 00:08:28.370 "subsystems": [ 00:08:28.370 { 00:08:28.370 "subsystem": "bdev", 00:08:28.370 "config": [ 00:08:28.370 { 00:08:28.370 "params": { 00:08:28.370 "block_size": 512, 00:08:28.370 "num_blocks": 1048576, 00:08:28.370 "name": "malloc0" 00:08:28.370 }, 00:08:28.370 "method": "bdev_malloc_create" 00:08:28.370 }, 00:08:28.370 { 00:08:28.370 "params": { 00:08:28.370 "filename": "/dev/zram1", 00:08:28.370 "name": "uring0" 00:08:28.370 }, 00:08:28.370 "method": "bdev_uring_create" 00:08:28.370 }, 00:08:28.370 { 00:08:28.370 "params": { 00:08:28.370 "name": "uring0" 00:08:28.370 }, 00:08:28.370 "method": "bdev_uring_delete" 00:08:28.370 }, 00:08:28.370 { 00:08:28.370 "method": "bdev_wait_for_examine" 00:08:28.370 } 00:08:28.370 ] 00:08:28.370 } 00:08:28.370 ] 00:08:28.370 } 00:08:28.630 [2024-11-26 16:13:54.090756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.630 [2024-11-26 16:13:54.108258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.630 [2024-11-26 16:13:54.134933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.630  [2024-11-26T16:13:54.543Z] Copying: 0/0 [B] (average 0 Bps) 00:08:28.890 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.890 16:13:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.890 16:13:54 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:28.890 [2024-11-26 16:13:54.521821] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:28.890 [2024-11-26 16:13:54.521916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73297 ] 00:08:28.890 { 00:08:28.890 "subsystems": [ 00:08:28.890 { 00:08:28.890 "subsystem": "bdev", 00:08:28.890 "config": [ 00:08:28.890 { 00:08:28.890 "params": { 00:08:28.890 "block_size": 512, 00:08:28.890 "num_blocks": 1048576, 00:08:28.890 "name": "malloc0" 00:08:28.890 }, 00:08:28.890 "method": "bdev_malloc_create" 00:08:28.890 }, 00:08:28.890 { 00:08:28.890 "params": { 00:08:28.890 "filename": "/dev/zram1", 00:08:28.890 "name": "uring0" 00:08:28.890 }, 00:08:28.890 "method": "bdev_uring_create" 00:08:28.890 }, 00:08:28.890 { 00:08:28.890 "params": { 00:08:28.890 "name": "uring0" 00:08:28.890 }, 00:08:28.890 "method": "bdev_uring_delete" 00:08:28.890 }, 00:08:28.890 { 00:08:28.890 "method": "bdev_wait_for_examine" 00:08:28.890 } 00:08:28.890 ] 00:08:28.890 } 00:08:28.890 ] 00:08:28.890 } 00:08:29.149 [2024-11-26 16:13:54.664257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.149 [2024-11-26 16:13:54.683901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.149 [2024-11-26 16:13:54.714187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.408 [2024-11-26 16:13:54.827581] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:29.408 [2024-11-26 16:13:54.827627] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:29.408 [2024-11-26 16:13:54.827654] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:29.408 [2024-11-26 16:13:54.827662] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.408 [2024-11-26 16:13:54.986393] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:29.408 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:29.666 ************************************ 00:08:29.666 END TEST dd_uring_copy 00:08:29.666 ************************************ 00:08:29.666 00:08:29.666 real 0m12.083s 00:08:29.666 user 0m8.274s 00:08:29.666 sys 0m10.501s 00:08:29.666 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.666 16:13:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:29.666 ************************************ 00:08:29.666 END TEST spdk_dd_uring 00:08:29.666 ************************************ 00:08:29.666 00:08:29.666 real 0m12.324s 00:08:29.666 user 0m8.419s 00:08:29.666 sys 0m10.599s 00:08:29.666 16:13:55 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.666 16:13:55 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:29.926 16:13:55 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:29.926 16:13:55 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.926 16:13:55 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.926 16:13:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:29.926 ************************************ 00:08:29.927 START TEST spdk_dd_sparse 00:08:29.927 ************************************ 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:29.927 * Looking for test storage... 00:08:29.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:29.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.927 --rc genhtml_branch_coverage=1 00:08:29.927 --rc genhtml_function_coverage=1 00:08:29.927 --rc genhtml_legend=1 00:08:29.927 --rc geninfo_all_blocks=1 00:08:29.927 --rc geninfo_unexecuted_blocks=1 00:08:29.927 00:08:29.927 ' 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:29.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.927 --rc genhtml_branch_coverage=1 00:08:29.927 --rc genhtml_function_coverage=1 00:08:29.927 --rc genhtml_legend=1 00:08:29.927 --rc geninfo_all_blocks=1 00:08:29.927 --rc geninfo_unexecuted_blocks=1 00:08:29.927 00:08:29.927 ' 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:29.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.927 --rc genhtml_branch_coverage=1 00:08:29.927 --rc genhtml_function_coverage=1 00:08:29.927 --rc genhtml_legend=1 00:08:29.927 --rc geninfo_all_blocks=1 00:08:29.927 --rc geninfo_unexecuted_blocks=1 00:08:29.927 00:08:29.927 ' 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:29.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.927 --rc genhtml_branch_coverage=1 00:08:29.927 --rc genhtml_function_coverage=1 00:08:29.927 --rc genhtml_legend=1 00:08:29.927 --rc geninfo_all_blocks=1 00:08:29.927 --rc geninfo_unexecuted_blocks=1 00:08:29.927 00:08:29.927 ' 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.927 16:13:55 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:29.927 1+0 records in 00:08:29.927 1+0 records out 00:08:29.927 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00684602 s, 613 MB/s 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:29.927 1+0 records in 00:08:29.927 1+0 records out 00:08:29.927 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00685165 s, 612 MB/s 00:08:29.927 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:30.187 1+0 records in 00:08:30.187 1+0 records out 00:08:30.187 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00656449 s, 639 MB/s 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:30.187 ************************************ 00:08:30.187 START TEST dd_sparse_file_to_file 00:08:30.187 ************************************ 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:30.187 16:13:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:30.187 [2024-11-26 16:13:55.650765] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
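[editor's note] The prepare step traced above lays out file_zero1 as a sparse file before the file-to-file copy: a 100 MiB backing file for the AIO bdev, then three 4 MiB zero-filled extents written at offsets 0, 16 MiB and 32 MiB, so the file ends up with 36 MiB of apparent size (stat %s = 37748736) but only 12 MiB of allocated blocks (stat %b = 24576 512-byte blocks). A minimal standalone sketch of that layout, with names and sizes copied from the trace — the real logic lives in test/dd/sparse.sh and feeds spdk_dd its JSON bdev config through gen_conf:

#!/usr/bin/env bash
# Sketch only: reproduce the sparse layout used by the dd_sparse tests above.
set -euo pipefail

truncate dd_sparse_aio_disk --size 104857600          # 100 MiB backing file for the bdev_aio
dd if=/dev/zero of=file_zero1 bs=4M count=1           # data extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # extent at 16 MiB, leaving a hole at 4-16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # extent at 32 MiB -> 36 MiB apparent size

# Apparent size vs. allocated blocks, the same check the test later makes on the copy:
stat --printf='apparent=%s bytes, allocated=%b blocks\n' file_zero1   # expect 37748736 / 24576

# The copy itself then runs as traced above, with the bdev config fed through a
# descriptor by the harness:
#   spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62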
00:08:30.187 [2024-11-26 16:13:55.650861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73391 ] 00:08:30.187 { 00:08:30.187 "subsystems": [ 00:08:30.187 { 00:08:30.187 "subsystem": "bdev", 00:08:30.187 "config": [ 00:08:30.187 { 00:08:30.187 "params": { 00:08:30.187 "block_size": 4096, 00:08:30.187 "filename": "dd_sparse_aio_disk", 00:08:30.187 "name": "dd_aio" 00:08:30.187 }, 00:08:30.187 "method": "bdev_aio_create" 00:08:30.187 }, 00:08:30.187 { 00:08:30.187 "params": { 00:08:30.187 "lvs_name": "dd_lvstore", 00:08:30.187 "bdev_name": "dd_aio" 00:08:30.187 }, 00:08:30.187 "method": "bdev_lvol_create_lvstore" 00:08:30.187 }, 00:08:30.187 { 00:08:30.187 "method": "bdev_wait_for_examine" 00:08:30.187 } 00:08:30.187 ] 00:08:30.187 } 00:08:30.187 ] 00:08:30.187 } 00:08:30.187 [2024-11-26 16:13:55.794357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.187 [2024-11-26 16:13:55.811551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.446 [2024-11-26 16:13:55.839682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.446  [2024-11-26T16:13:56.099Z] Copying: 12/36 [MB] (average 1200 MBps) 00:08:30.446 00:08:30.446 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:30.446 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:30.446 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:30.446 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:30.446 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:30.446 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:30.446 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:30.447 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:30.447 ************************************ 00:08:30.447 END TEST dd_sparse_file_to_file 00:08:30.447 ************************************ 00:08:30.447 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:30.447 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:30.447 00:08:30.447 real 0m0.471s 00:08:30.447 user 0m0.307s 00:08:30.447 sys 0m0.219s 00:08:30.447 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.447 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 ************************************ 00:08:30.706 START TEST dd_sparse_file_to_bdev 
00:08:30.706 ************************************ 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:30.706 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 [2024-11-26 16:13:56.158501] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:30.706 [2024-11-26 16:13:56.158589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73439 ] 00:08:30.706 { 00:08:30.706 "subsystems": [ 00:08:30.706 { 00:08:30.706 "subsystem": "bdev", 00:08:30.706 "config": [ 00:08:30.706 { 00:08:30.706 "params": { 00:08:30.706 "block_size": 4096, 00:08:30.706 "filename": "dd_sparse_aio_disk", 00:08:30.706 "name": "dd_aio" 00:08:30.706 }, 00:08:30.706 "method": "bdev_aio_create" 00:08:30.706 }, 00:08:30.706 { 00:08:30.706 "params": { 00:08:30.706 "lvs_name": "dd_lvstore", 00:08:30.706 "lvol_name": "dd_lvol", 00:08:30.706 "size_in_mib": 36, 00:08:30.706 "thin_provision": true 00:08:30.706 }, 00:08:30.706 "method": "bdev_lvol_create" 00:08:30.706 }, 00:08:30.706 { 00:08:30.706 "method": "bdev_wait_for_examine" 00:08:30.706 } 00:08:30.706 ] 00:08:30.706 } 00:08:30.706 ] 00:08:30.707 } 00:08:30.707 [2024-11-26 16:13:56.296090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.707 [2024-11-26 16:13:56.314025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.707 [2024-11-26 16:13:56.343016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.966  [2024-11-26T16:13:56.619Z] Copying: 12/36 [MB] (average 500 MBps) 00:08:30.966 00:08:30.966 ************************************ 00:08:30.966 END TEST dd_sparse_file_to_bdev 00:08:30.966 ************************************ 00:08:30.966 00:08:30.966 real 0m0.448s 00:08:30.966 user 0m0.272s 00:08:30.966 sys 0m0.223s 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:30.966 ************************************ 00:08:30.966 START TEST dd_sparse_bdev_to_file 00:08:30.966 ************************************ 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:30.966 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:31.225 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:31.225 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:31.226 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:31.226 16:13:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:31.226 [2024-11-26 16:13:56.666334] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:31.226 { 00:08:31.226 "subsystems": [ 00:08:31.226 { 00:08:31.226 "subsystem": "bdev", 00:08:31.226 "config": [ 00:08:31.226 { 00:08:31.226 "params": { 00:08:31.226 "block_size": 4096, 00:08:31.226 "filename": "dd_sparse_aio_disk", 00:08:31.226 "name": "dd_aio" 00:08:31.226 }, 00:08:31.226 "method": "bdev_aio_create" 00:08:31.226 }, 00:08:31.226 { 00:08:31.226 "method": "bdev_wait_for_examine" 00:08:31.226 } 00:08:31.226 ] 00:08:31.226 } 00:08:31.226 ] 00:08:31.226 } 00:08:31.226 [2024-11-26 16:13:56.666599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73466 ] 00:08:31.226 [2024-11-26 16:13:56.810825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.226 [2024-11-26 16:13:56.829552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.226 [2024-11-26 16:13:56.856860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.485  [2024-11-26T16:13:57.138Z] Copying: 12/36 [MB] (average 1090 MBps) 00:08:31.485 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:31.485 00:08:31.485 real 0m0.459s 00:08:31.485 user 0m0.271s 00:08:31.485 sys 0m0.227s 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:31.485 ************************************ 00:08:31.485 END TEST dd_sparse_bdev_to_file 00:08:31.485 ************************************ 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:31.485 16:13:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:31.745 ************************************ 00:08:31.745 END TEST spdk_dd_sparse 00:08:31.745 ************************************ 00:08:31.745 00:08:31.745 real 0m1.789s 00:08:31.745 user 0m1.018s 00:08:31.745 sys 0m0.888s 00:08:31.745 16:13:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.745 16:13:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:31.745 16:13:57 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:31.745 16:13:57 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.745 16:13:57 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.745 16:13:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:31.745 ************************************ 00:08:31.745 START TEST spdk_dd_negative 00:08:31.745 ************************************ 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:31.745 * Looking for test storage... 
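[editor's note] Taken together, the three sparse sub-tests above round-trip the data: file_zero1 to file_zero2 as a plain file copy (this step's config also creates dd_lvstore on dd_aio), file_zero2 into dd_lvstore/dd_lvol via --ob with a thin-provisioned 36 MiB lvol, and dd_lvstore/dd_lvol back out to file_zero3 via --ib, after which stat confirms file_zero3 matches file_zero2 in both apparent size and allocated blocks. A condensed sketch of the --ob leg, with the JSON values copied from the gen_conf dump above; it assumes dd_lvstore already exists on the AIO file, exactly as it does at this point in the trace:

# Sketch only: shape of the file -> lvol copy shown in the trace above.
cat > dd_lvol.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_aio_create",
          "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
        { "method": "bdev_lvol_create",
          "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                      "size_in_mib": 36, "thin_provision": true } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# file -> lvol, with --bs=12582912 and --sparse as in the trace:
build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json dd_lvol.json
# lvol -> file uses the same invocation shape, but its config keeps only
# bdev_aio_create and bdev_wait_for_examine, since examine rediscovers the lvol:
#   build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json <aio-only config>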
00:08:31.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.745 --rc genhtml_branch_coverage=1 00:08:31.745 --rc genhtml_function_coverage=1 00:08:31.745 --rc genhtml_legend=1 00:08:31.745 --rc geninfo_all_blocks=1 00:08:31.745 --rc geninfo_unexecuted_blocks=1 00:08:31.745 00:08:31.745 ' 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.745 --rc genhtml_branch_coverage=1 00:08:31.745 --rc genhtml_function_coverage=1 00:08:31.745 --rc genhtml_legend=1 00:08:31.745 --rc geninfo_all_blocks=1 00:08:31.745 --rc geninfo_unexecuted_blocks=1 00:08:31.745 00:08:31.745 ' 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.745 --rc genhtml_branch_coverage=1 00:08:31.745 --rc genhtml_function_coverage=1 00:08:31.745 --rc genhtml_legend=1 00:08:31.745 --rc geninfo_all_blocks=1 00:08:31.745 --rc geninfo_unexecuted_blocks=1 00:08:31.745 00:08:31.745 ' 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.745 --rc genhtml_branch_coverage=1 00:08:31.745 --rc genhtml_function_coverage=1 00:08:31.745 --rc genhtml_legend=1 00:08:31.745 --rc geninfo_all_blocks=1 00:08:31.745 --rc geninfo_unexecuted_blocks=1 00:08:31.745 00:08:31.745 ' 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.745 16:13:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.746 16:13:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.746 16:13:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.746 16:13:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:31.746 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.746 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.746 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.746 ************************************ 00:08:31.746 START TEST 
dd_invalid_arguments 00:08:31.746 ************************************ 00:08:32.005 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:32.006 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:32.006 00:08:32.006 CPU options: 00:08:32.006 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:32.006 (like [0,1,10]) 00:08:32.006 --lcores lcore to CPU mapping list. The list is in the format: 00:08:32.006 [<,lcores[@CPUs]>...] 00:08:32.006 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:32.006 Within the group, '-' is used for range separator, 00:08:32.006 ',' is used for single number separator. 00:08:32.006 '( )' can be omitted for single element group, 00:08:32.006 '@' can be omitted if cpus and lcores have the same value 00:08:32.006 --disable-cpumask-locks Disable CPU core lock files. 00:08:32.006 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:32.006 pollers in the app support interrupt mode) 00:08:32.006 -p, --main-core main (primary) core for DPDK 00:08:32.006 00:08:32.006 Configuration options: 00:08:32.006 -c, --config, --json JSON config file 00:08:32.006 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:32.006 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:32.006 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:32.006 --rpcs-allowed comma-separated list of permitted RPCS 00:08:32.006 --json-ignore-init-errors don't exit on invalid config entry 00:08:32.006 00:08:32.006 Memory options: 00:08:32.006 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:32.006 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:32.006 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:32.006 -R, --huge-unlink unlink huge files after initialization 00:08:32.006 -n, --mem-channels number of memory channels used for DPDK 00:08:32.006 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:32.006 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:32.006 --no-huge run without using hugepages 00:08:32.006 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:32.006 -i, --shm-id shared memory ID (optional) 00:08:32.006 -g, --single-file-segments force creating just one hugetlbfs file 00:08:32.006 00:08:32.006 PCI options: 00:08:32.006 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:32.006 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:32.006 -u, --no-pci disable PCI access 00:08:32.006 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:32.006 00:08:32.006 Log options: 00:08:32.006 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:32.006 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:32.006 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:32.006 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:32.006 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:32.006 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:32.006 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:32.006 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:32.006 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:32.006 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:32.006 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:32.006 --silence-noticelog disable notice level logging to stderr 00:08:32.006 00:08:32.006 Trace options: 00:08:32.006 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:32.006 setting 0 to disable trace (default 32768) 00:08:32.006 Tracepoints vary in size and can use more than one trace entry. 00:08:32.006 -e, --tpoint-group [:] 00:08:32.006 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:32.006 [2024-11-26 16:13:57.446932] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:32.006 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:32.006 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:32.006 bdev_raid, scheduler, all). 00:08:32.006 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:32.006 a tracepoint group. First tpoint inside a group can be enabled by 00:08:32.006 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:32.006 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:32.006 in /include/spdk_internal/trace_defs.h 00:08:32.006 00:08:32.006 Other options: 00:08:32.006 -h, --help show this usage 00:08:32.006 -v, --version print SPDK version 00:08:32.006 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:32.006 --env-context Opaque context for use of the env implementation 00:08:32.006 00:08:32.006 Application specific: 00:08:32.006 [--------- DD Options ---------] 00:08:32.006 --if Input file. Must specify either --if or --ib. 00:08:32.006 --ib Input bdev. Must specifier either --if or --ib 00:08:32.006 --of Output file. Must specify either --of or --ob. 00:08:32.006 --ob Output bdev. Must specify either --of or --ob. 00:08:32.006 --iflag Input file flags. 00:08:32.006 --oflag Output file flags. 00:08:32.006 --bs I/O unit size (default: 4096) 00:08:32.006 --qd Queue depth (default: 2) 00:08:32.006 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:32.006 --skip Skip this many I/O units at start of input. (default: 0) 00:08:32.006 --seek Skip this many I/O units at start of output. (default: 0) 00:08:32.006 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:32.006 --sparse Enable hole skipping in input target 00:08:32.006 Available iflag and oflag values: 00:08:32.006 append - append mode 00:08:32.006 direct - use direct I/O for data 00:08:32.006 directory - fail unless a directory 00:08:32.006 dsync - use synchronized I/O for data 00:08:32.006 noatime - do not update access time 00:08:32.006 noctty - do not assign controlling terminal from file 00:08:32.006 nofollow - do not follow symlinks 00:08:32.006 nonblock - use non-blocking I/O 00:08:32.006 sync - use synchronized I/O for data and metadata 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.006 00:08:32.006 real 0m0.075s 00:08:32.006 user 0m0.043s 00:08:32.006 sys 0m0.029s 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.006 ************************************ 00:08:32.006 END TEST dd_invalid_arguments 00:08:32.006 ************************************ 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.006 ************************************ 00:08:32.006 START TEST dd_double_input 00:08:32.006 ************************************ 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.006 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:32.007 [2024-11-26 16:13:57.570429] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
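[editor's note] The negative suite drives spdk_dd through one deliberately malformed invocation per sub-test, each wrapped in the harness's NOT helper so the non-zero exit is the expected outcome, and checks for the specific parse error. Condensed, the invocations exercised here and in the sub-tests that follow look like this (paths shortened to dd.dump0/dd.dump1; the error text is copied from the trace):

# Sketch only: every command below is expected to fail with the quoted error.
spdk_dd --ii= --ob=                                      # unrecognized option '--ii=' -> "Invalid arguments"
spdk_dd --if=dd.dump0 --ib= --ob=                        # "You may specify either --if or --ib, but not both."
spdk_dd --if=dd.dump0 --of=dd.dump1 --ob=                # "You may specify either --of or --ob, but not both."
spdk_dd --ob=                                            # "You must specify either --if or --ib"
spdk_dd --if=dd.dump0                                    # "You must specify either --of or --ob"
spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=0               # "Invalid --bs value"
spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=99999999999999  # "Cannot allocate memory - try smaller block size value"
spdk_dd --if=dd.dump0 --of=dd.dump1 --count=-9           # "Invalid --count value"
spdk_dd --ib= --ob= --oflag=0                            # "--oflags may be used only with --of"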
00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.007 00:08:32.007 real 0m0.072s 00:08:32.007 user 0m0.047s 00:08:32.007 sys 0m0.023s 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.007 ************************************ 00:08:32.007 END TEST dd_double_input 00:08:32.007 ************************************ 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.007 ************************************ 00:08:32.007 START TEST dd_double_output 00:08:32.007 ************************************ 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.007 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:32.266 [2024-11-26 16:13:57.697808] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:32.266 ************************************ 00:08:32.266 END TEST dd_double_output 00:08:32.266 ************************************ 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.266 00:08:32.266 real 0m0.072s 00:08:32.266 user 0m0.049s 00:08:32.266 sys 0m0.022s 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.266 ************************************ 00:08:32.266 START TEST dd_no_input 00:08:32.266 ************************************ 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:08:32.266 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:32.267 [2024-11-26 16:13:57.822062] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.267 00:08:32.267 real 0m0.075s 00:08:32.267 user 0m0.042s 00:08:32.267 sys 0m0.031s 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:32.267 ************************************ 00:08:32.267 END TEST dd_no_input 00:08:32.267 ************************************ 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.267 ************************************ 00:08:32.267 START TEST dd_no_output 00:08:32.267 ************************************ 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.267 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.526 [2024-11-26 16:13:57.947128] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:32.527 16:13:57 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:08:32.527 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.527 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.527 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.527 00:08:32.527 real 0m0.072s 00:08:32.527 user 0m0.042s 00:08:32.527 sys 0m0.029s 00:08:32.527 ************************************ 00:08:32.527 END TEST dd_no_output 00:08:32.527 ************************************ 00:08:32.527 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.527 16:13:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.527 ************************************ 00:08:32.527 START TEST dd_wrong_blocksize 00:08:32.527 ************************************ 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:32.527 [2024-11-26 16:13:58.077539] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.527 00:08:32.527 real 0m0.078s 00:08:32.527 user 0m0.050s 00:08:32.527 sys 0m0.024s 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.527 ************************************ 00:08:32.527 END TEST dd_wrong_blocksize 00:08:32.527 ************************************ 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.527 ************************************ 00:08:32.527 START TEST dd_smaller_blocksize 00:08:32.527 ************************************ 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.527 
16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.527 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:32.786 [2024-11-26 16:13:58.201758] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:32.786 [2024-11-26 16:13:58.202030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73698 ] 00:08:32.786 [2024-11-26 16:13:58.352738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.786 [2024-11-26 16:13:58.377129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.786 [2024-11-26 16:13:58.409616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.786 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:32.786 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:32.786 [2024-11-26 16:13:58.427309] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:32.786 [2024-11-26 16:13:58.427367] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.046 [2024-11-26 16:13:58.493288] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.046 00:08:33.046 real 0m0.410s 00:08:33.046 user 0m0.205s 00:08:33.046 sys 0m0.099s 00:08:33.046 ************************************ 00:08:33.046 END TEST dd_smaller_blocksize 00:08:33.046 ************************************ 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.046 ************************************ 00:08:33.046 START TEST dd_invalid_count 00:08:33.046 ************************************ 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:33.046 [2024-11-26 16:13:58.664876] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.046 00:08:33.046 real 0m0.072s 00:08:33.046 user 0m0.046s 00:08:33.046 sys 0m0.024s 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.046 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:33.046 ************************************ 00:08:33.046 END TEST dd_invalid_count 00:08:33.046 ************************************ 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.306 ************************************ 
00:08:33.306 START TEST dd_invalid_oflag 00:08:33.306 ************************************ 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:33.306 [2024-11-26 16:13:58.796522] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.306 00:08:33.306 real 0m0.074s 00:08:33.306 user 0m0.043s 00:08:33.306 sys 0m0.029s 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.306 ************************************ 00:08:33.306 END TEST dd_invalid_oflag 00:08:33.306 ************************************ 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.306 ************************************ 00:08:33.306 START TEST dd_invalid_iflag 00:08:33.306 
************************************ 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:33.306 [2024-11-26 16:13:58.924442] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.306 ************************************ 00:08:33.306 END TEST dd_invalid_iflag 00:08:33.306 ************************************ 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.306 00:08:33.306 real 0m0.075s 00:08:33.306 user 0m0.048s 00:08:33.306 sys 0m0.025s 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.306 16:13:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.565 ************************************ 00:08:33.565 START TEST dd_unknown_flag 00:08:33.565 ************************************ 00:08:33.565 
16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.565 16:13:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:33.565 [2024-11-26 16:13:59.047018] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:33.565 [2024-11-26 16:13:59.047490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73784 ] 00:08:33.565 [2024-11-26 16:13:59.195211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.823 [2024-11-26 16:13:59.219680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.823 [2024-11-26 16:13:59.252333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.823 [2024-11-26 16:13:59.269510] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:33.823 [2024-11-26 16:13:59.269579] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.823 [2024-11-26 16:13:59.269640] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:33.823 [2024-11-26 16:13:59.269656] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.823 [2024-11-26 16:13:59.269895] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:33.823 [2024-11-26 16:13:59.269915] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.823 [2024-11-26 16:13:59.269976] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:33.823 [2024-11-26 16:13:59.269996] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:33.823 [2024-11-26 16:13:59.335828] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:08:33.823 ************************************ 00:08:33.823 END TEST dd_unknown_flag 00:08:33.823 ************************************ 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.823 00:08:33.823 real 0m0.407s 00:08:33.823 user 0m0.202s 00:08:33.823 sys 0m0.112s 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.823 ************************************ 00:08:33.823 START TEST dd_invalid_json 00:08:33.823 ************************************ 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.823 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.824 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.824 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.824 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.824 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.824 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:34.082 [2024-11-26 16:13:59.512861] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:34.082 [2024-11-26 16:13:59.512955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73813 ] 00:08:34.082 [2024-11-26 16:13:59.664448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.082 [2024-11-26 16:13:59.687753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.082 [2024-11-26 16:13:59.687843] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:34.082 [2024-11-26 16:13:59.687865] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:34.083 [2024-11-26 16:13:59.687876] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.083 [2024-11-26 16:13:59.687917] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.342 00:08:34.342 real 0m0.283s 00:08:34.342 user 0m0.121s 00:08:34.342 sys 0m0.059s 00:08:34.342 ************************************ 00:08:34.342 END TEST dd_invalid_json 00:08:34.342 ************************************ 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.342 ************************************ 00:08:34.342 START TEST dd_invalid_seek 00:08:34.342 ************************************ 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:34.342 
16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.342 16:13:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:34.342 [2024-11-26 16:13:59.858495] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:34.342 [2024-11-26 16:13:59.858588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73837 ] 00:08:34.342 { 00:08:34.342 "subsystems": [ 00:08:34.342 { 00:08:34.342 "subsystem": "bdev", 00:08:34.342 "config": [ 00:08:34.342 { 00:08:34.342 "params": { 00:08:34.342 "block_size": 512, 00:08:34.342 "num_blocks": 512, 00:08:34.342 "name": "malloc0" 00:08:34.342 }, 00:08:34.342 "method": "bdev_malloc_create" 00:08:34.342 }, 00:08:34.342 { 00:08:34.342 "params": { 00:08:34.342 "block_size": 512, 00:08:34.342 "num_blocks": 512, 00:08:34.342 "name": "malloc1" 00:08:34.342 }, 00:08:34.342 "method": "bdev_malloc_create" 00:08:34.342 }, 00:08:34.342 { 00:08:34.342 "method": "bdev_wait_for_examine" 00:08:34.343 } 00:08:34.343 ] 00:08:34.343 } 00:08:34.343 ] 00:08:34.343 } 00:08:34.602 [2024-11-26 16:14:00.011055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.602 [2024-11-26 16:14:00.035169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.602 [2024-11-26 16:14:00.068181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.602 [2024-11-26 16:14:00.111649] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:34.602 [2024-11-26 16:14:00.111717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.602 [2024-11-26 16:14:00.177702] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:34.602 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:08:34.602 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.602 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:08:34.602 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.602 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:08:34.602 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.602 00:08:34.602 real 0m0.436s 00:08:34.602 user 0m0.284s 00:08:34.602 sys 0m0.115s 00:08:34.602 ************************************ 00:08:34.602 END TEST dd_invalid_seek 00:08:34.602 ************************************ 00:08:34.602 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.602 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.862 ************************************ 00:08:34.862 START TEST dd_invalid_skip 00:08:34.862 ************************************ 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.862 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:34.862 [2024-11-26 16:14:00.344749] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:34.862 [2024-11-26 16:14:00.345215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73876 ] 00:08:34.862 { 00:08:34.862 "subsystems": [ 00:08:34.862 { 00:08:34.862 "subsystem": "bdev", 00:08:34.862 "config": [ 00:08:34.862 { 00:08:34.862 "params": { 00:08:34.862 "block_size": 512, 00:08:34.862 "num_blocks": 512, 00:08:34.862 "name": "malloc0" 00:08:34.862 }, 00:08:34.862 "method": "bdev_malloc_create" 00:08:34.862 }, 00:08:34.862 { 00:08:34.862 "params": { 00:08:34.862 "block_size": 512, 00:08:34.862 "num_blocks": 512, 00:08:34.862 "name": "malloc1" 00:08:34.862 }, 00:08:34.862 "method": "bdev_malloc_create" 00:08:34.862 }, 00:08:34.862 { 00:08:34.862 "method": "bdev_wait_for_examine" 00:08:34.862 } 00:08:34.862 ] 00:08:34.862 } 00:08:34.862 ] 00:08:34.862 } 00:08:34.862 [2024-11-26 16:14:00.498841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.122 [2024-11-26 16:14:00.522862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.122 [2024-11-26 16:14:00.555957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.122 [2024-11-26 16:14:00.599699] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:35.122 [2024-11-26 16:14:00.599776] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.122 [2024-11-26 16:14:00.668231] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:08:35.122 ************************************ 00:08:35.122 END TEST dd_invalid_skip 00:08:35.122 ************************************ 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.122 00:08:35.122 real 0m0.442s 00:08:35.122 user 0m0.286s 00:08:35.122 sys 0m0.116s 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.122 16:14:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:35.382 ************************************ 00:08:35.382 START TEST dd_invalid_input_count 00:08:35.382 ************************************ 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:08:35.382 16:14:00 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.382 16:14:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:35.382 { 00:08:35.382 "subsystems": [ 00:08:35.382 { 00:08:35.382 "subsystem": "bdev", 00:08:35.382 "config": [ 00:08:35.382 { 00:08:35.382 "params": { 00:08:35.382 "block_size": 512, 00:08:35.382 "num_blocks": 512, 00:08:35.382 "name": "malloc0" 00:08:35.382 }, 
00:08:35.382 "method": "bdev_malloc_create" 00:08:35.382 }, 00:08:35.382 { 00:08:35.382 "params": { 00:08:35.382 "block_size": 512, 00:08:35.382 "num_blocks": 512, 00:08:35.382 "name": "malloc1" 00:08:35.382 }, 00:08:35.382 "method": "bdev_malloc_create" 00:08:35.382 }, 00:08:35.382 { 00:08:35.382 "method": "bdev_wait_for_examine" 00:08:35.382 } 00:08:35.382 ] 00:08:35.382 } 00:08:35.382 ] 00:08:35.382 } 00:08:35.382 [2024-11-26 16:14:00.835302] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:35.382 [2024-11-26 16:14:00.835449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73904 ] 00:08:35.382 [2024-11-26 16:14:00.982984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.382 [2024-11-26 16:14:01.003241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.642 [2024-11-26 16:14:01.032571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.642 [2024-11-26 16:14:01.072632] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:35.642 [2024-11-26 16:14:01.072714] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.642 [2024-11-26 16:14:01.128349] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.642 00:08:35.642 real 0m0.396s 00:08:35.642 user 0m0.230s 00:08:35.642 sys 0m0.118s 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.642 ************************************ 00:08:35.642 END TEST dd_invalid_input_count 00:08:35.642 ************************************ 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:35.642 ************************************ 00:08:35.642 START TEST dd_invalid_output_count 00:08:35.642 ************************************ 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.642 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:35.642 [2024-11-26 16:14:01.279035] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:35.642 [2024-11-26 16:14:01.279141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73943 ] 00:08:35.642 { 00:08:35.642 "subsystems": [ 00:08:35.642 { 00:08:35.642 "subsystem": "bdev", 00:08:35.642 "config": [ 00:08:35.642 { 00:08:35.642 "params": { 00:08:35.642 "block_size": 512, 00:08:35.642 "num_blocks": 512, 00:08:35.642 "name": "malloc0" 00:08:35.642 }, 00:08:35.642 "method": "bdev_malloc_create" 00:08:35.642 }, 00:08:35.642 { 00:08:35.642 "method": "bdev_wait_for_examine" 00:08:35.642 } 00:08:35.642 ] 00:08:35.642 } 00:08:35.642 ] 00:08:35.642 } 00:08:35.901 [2024-11-26 16:14:01.424539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.901 [2024-11-26 16:14:01.442224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.901 [2024-11-26 16:14:01.469163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.901 [2024-11-26 16:14:01.501211] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:35.901 [2024-11-26 16:14:01.501296] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.161 [2024-11-26 16:14:01.557567] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.161 00:08:36.161 real 0m0.389s 00:08:36.161 user 0m0.247s 00:08:36.161 sys 0m0.099s 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:36.161 ************************************ 00:08:36.161 END TEST dd_invalid_output_count 00:08:36.161 ************************************ 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:36.161 ************************************ 00:08:36.161 START TEST dd_bs_not_multiple 00:08:36.161 ************************************ 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:36.161 16:14:01 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.161 16:14:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:36.161 [2024-11-26 16:14:01.717138] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:36.162 [2024-11-26 16:14:01.717230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73969 ] 00:08:36.162 { 00:08:36.162 "subsystems": [ 00:08:36.162 { 00:08:36.162 "subsystem": "bdev", 00:08:36.162 "config": [ 00:08:36.162 { 00:08:36.162 "params": { 00:08:36.162 "block_size": 512, 00:08:36.162 "num_blocks": 512, 00:08:36.162 "name": "malloc0" 00:08:36.162 }, 00:08:36.162 "method": "bdev_malloc_create" 00:08:36.162 }, 00:08:36.162 { 00:08:36.162 "params": { 00:08:36.162 "block_size": 512, 00:08:36.162 "num_blocks": 512, 00:08:36.162 "name": "malloc1" 00:08:36.162 }, 00:08:36.162 "method": "bdev_malloc_create" 00:08:36.162 }, 00:08:36.162 { 00:08:36.162 "method": "bdev_wait_for_examine" 00:08:36.162 } 00:08:36.162 ] 00:08:36.162 } 00:08:36.162 ] 00:08:36.162 } 00:08:36.420 [2024-11-26 16:14:01.861277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.420 [2024-11-26 16:14:01.879138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.420 [2024-11-26 16:14:01.906381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.421 [2024-11-26 16:14:01.946786] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:36.421 [2024-11-26 16:14:01.946872] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.421 [2024-11-26 16:14:02.008222] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:36.421 16:14:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:08:36.421 16:14:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.421 16:14:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:08:36.421 16:14:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:08:36.421 16:14:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:08:36.421 16:14:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.680 00:08:36.680 real 0m0.407s 00:08:36.680 user 0m0.265s 00:08:36.680 sys 0m0.101s 00:08:36.680 16:14:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.680 16:14:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:36.680 ************************************ 00:08:36.680 END TEST dd_bs_not_multiple 00:08:36.680 ************************************ 00:08:36.680 00:08:36.680 real 0m4.926s 00:08:36.680 user 0m2.651s 00:08:36.680 sys 0m1.670s 00:08:36.680 16:14:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.680 16:14:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:36.680 ************************************ 00:08:36.680 END TEST spdk_dd_negative 00:08:36.680 ************************************ 00:08:36.680 00:08:36.680 real 1m0.924s 00:08:36.680 user 0m38.372s 00:08:36.680 sys 0m25.720s 00:08:36.680 16:14:02 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.681 ************************************ 00:08:36.681 END TEST spdk_dd 00:08:36.681 
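With the spdk_dd negative tests finished above, one detail worth pulling out of the seek/skip/count/bs traces is the bdev layout they hand spdk_dd over a pipe as --json /dev/fd/62: two 512-block malloc bdevs plus bdev_wait_for_examine. Below is a hedged standalone sketch of the same invocation; the JSON body and the spdk_dd flags are copied from the dd_invalid_seek trace, while feeding the config through a here-document on /dev/stdin (instead of the harness's /dev/fd/62 process substitution) is an assumption made only for illustration.

#!/usr/bin/env bash
# Sketch: recreate the dd_invalid_seek negative case outside the harness.
# Two 512-block, 512-byte-block malloc bdevs, then ask spdk_dd to seek one
# block past the end of the output bdev, which must fail.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

if "$SPDK_DD" --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json /dev/stdin <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
then
    echo "FAIL: spdk_dd accepted --seek=513 on a 512-block bdev" >&2
    exit 1
fi
echo "PASS: out-of-range --seek rejected, as in the trace above"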
************************************ 00:08:36.681 16:14:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:36.681 16:14:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:36.681 16:14:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:36.681 16:14:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:36.681 16:14:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.681 16:14:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.681 16:14:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:36.681 16:14:02 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:36.681 16:14:02 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:36.681 16:14:02 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:36.681 16:14:02 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:36.681 16:14:02 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:36.681 16:14:02 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:36.681 16:14:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.681 16:14:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.681 16:14:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.681 ************************************ 00:08:36.681 START TEST nvmf_tcp 00:08:36.681 ************************************ 00:08:36.681 16:14:02 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:36.681 * Looking for test storage... 00:08:36.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:36.681 16:14:02 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:36.681 16:14:02 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:36.681 16:14:02 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:36.941 16:14:02 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.941 16:14:02 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:36.941 16:14:02 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.941 16:14:02 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:36.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.941 --rc genhtml_branch_coverage=1 00:08:36.941 --rc genhtml_function_coverage=1 00:08:36.941 --rc genhtml_legend=1 00:08:36.941 --rc geninfo_all_blocks=1 00:08:36.941 --rc geninfo_unexecuted_blocks=1 00:08:36.941 00:08:36.941 ' 00:08:36.941 16:14:02 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:36.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.941 --rc genhtml_branch_coverage=1 00:08:36.941 --rc genhtml_function_coverage=1 00:08:36.941 --rc genhtml_legend=1 00:08:36.941 --rc geninfo_all_blocks=1 00:08:36.941 --rc geninfo_unexecuted_blocks=1 00:08:36.941 00:08:36.941 ' 00:08:36.941 16:14:02 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:36.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.941 --rc genhtml_branch_coverage=1 00:08:36.941 --rc genhtml_function_coverage=1 00:08:36.941 --rc genhtml_legend=1 00:08:36.941 --rc geninfo_all_blocks=1 00:08:36.941 --rc geninfo_unexecuted_blocks=1 00:08:36.941 00:08:36.941 ' 00:08:36.941 16:14:02 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:36.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.941 --rc genhtml_branch_coverage=1 00:08:36.941 --rc genhtml_function_coverage=1 00:08:36.941 --rc genhtml_legend=1 00:08:36.941 --rc geninfo_all_blocks=1 00:08:36.941 --rc geninfo_unexecuted_blocks=1 00:08:36.941 00:08:36.941 ' 00:08:36.941 16:14:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:36.941 16:14:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:36.941 16:14:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:36.941 16:14:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.941 16:14:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.941 16:14:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.941 ************************************ 00:08:36.941 START TEST nvmf_target_core 00:08:36.941 ************************************ 00:08:36.941 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:36.941 * Looking for test storage... 00:08:36.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:36.941 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:36.941 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:36.941 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.201 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.202 --rc genhtml_branch_coverage=1 00:08:37.202 --rc genhtml_function_coverage=1 00:08:37.202 --rc genhtml_legend=1 00:08:37.202 --rc geninfo_all_blocks=1 00:08:37.202 --rc geninfo_unexecuted_blocks=1 00:08:37.202 00:08:37.202 ' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.202 --rc genhtml_branch_coverage=1 00:08:37.202 --rc genhtml_function_coverage=1 00:08:37.202 --rc genhtml_legend=1 00:08:37.202 --rc geninfo_all_blocks=1 00:08:37.202 --rc geninfo_unexecuted_blocks=1 00:08:37.202 00:08:37.202 ' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.202 --rc genhtml_branch_coverage=1 00:08:37.202 --rc genhtml_function_coverage=1 00:08:37.202 --rc genhtml_legend=1 00:08:37.202 --rc geninfo_all_blocks=1 00:08:37.202 --rc geninfo_unexecuted_blocks=1 00:08:37.202 00:08:37.202 ' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.202 --rc genhtml_branch_coverage=1 00:08:37.202 --rc genhtml_function_coverage=1 00:08:37.202 --rc genhtml_legend=1 00:08:37.202 --rc geninfo_all_blocks=1 00:08:37.202 --rc geninfo_unexecuted_blocks=1 00:08:37.202 00:08:37.202 ' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.202 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.202 ************************************ 00:08:37.202 START TEST nvmf_host_management 00:08:37.202 ************************************ 00:08:37.202 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:37.202 * Looking for test storage... 
00:08:37.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.203 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.462 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:37.462 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:37.462 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.462 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:37.462 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.462 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.463 --rc genhtml_branch_coverage=1 00:08:37.463 --rc genhtml_function_coverage=1 00:08:37.463 --rc genhtml_legend=1 00:08:37.463 --rc geninfo_all_blocks=1 00:08:37.463 --rc geninfo_unexecuted_blocks=1 00:08:37.463 00:08:37.463 ' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.463 --rc genhtml_branch_coverage=1 00:08:37.463 --rc genhtml_function_coverage=1 00:08:37.463 --rc genhtml_legend=1 00:08:37.463 --rc geninfo_all_blocks=1 00:08:37.463 --rc geninfo_unexecuted_blocks=1 00:08:37.463 00:08:37.463 ' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.463 --rc genhtml_branch_coverage=1 00:08:37.463 --rc genhtml_function_coverage=1 00:08:37.463 --rc genhtml_legend=1 00:08:37.463 --rc geninfo_all_blocks=1 00:08:37.463 --rc geninfo_unexecuted_blocks=1 00:08:37.463 00:08:37.463 ' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.463 --rc genhtml_branch_coverage=1 00:08:37.463 --rc genhtml_function_coverage=1 00:08:37.463 --rc genhtml_legend=1 00:08:37.463 --rc geninfo_all_blocks=1 00:08:37.463 --rc geninfo_unexecuted_blocks=1 00:08:37.463 00:08:37.463 ' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.463 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.463 16:14:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:37.463 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:37.464 Cannot find device "nvmf_init_br" 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:37.464 Cannot find device "nvmf_init_br2" 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:37.464 Cannot find device "nvmf_tgt_br" 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:37.464 Cannot find device "nvmf_tgt_br2" 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:37.464 Cannot find device "nvmf_init_br" 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:37.464 Cannot find device "nvmf_init_br2" 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:37.464 Cannot find device "nvmf_tgt_br" 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:37.464 Cannot find device "nvmf_tgt_br2" 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:37.464 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:37.464 Cannot find device "nvmf_br" 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:37.464 Cannot find device "nvmf_init_if" 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:37.464 Cannot find device "nvmf_init_if2" 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:37.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:37.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:37.464 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:37.723 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:37.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:37.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:08:37.983 00:08:37.983 --- 10.0.0.3 ping statistics --- 00:08:37.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.983 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:37.983 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:37.983 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:08:37.983 00:08:37.983 --- 10.0.0.4 ping statistics --- 00:08:37.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.983 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:37.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:37.983 00:08:37.983 --- 10.0.0.1 ping statistics --- 00:08:37.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.983 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:37.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:37.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:08:37.983 00:08:37.983 --- 10.0.0.2 ping statistics --- 00:08:37.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.983 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=74309 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 74309 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 74309 ']' 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.983 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.983 [2024-11-26 16:14:03.496028] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:37.983 [2024-11-26 16:14:03.496129] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.242 [2024-11-26 16:14:03.649665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.242 [2024-11-26 16:14:03.677725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.242 [2024-11-26 16:14:03.677794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.242 [2024-11-26 16:14:03.677819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.242 [2024-11-26 16:14:03.677829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.242 [2024-11-26 16:14:03.677838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.242 [2024-11-26 16:14:03.678808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.242 [2024-11-26 16:14:03.678955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.242 [2024-11-26 16:14:03.679101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:38.242 [2024-11-26 16:14:03.679102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.242 [2024-11-26 16:14:03.713332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.242 [2024-11-26 16:14:03.813091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.242 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.242 Malloc0 00:08:38.501 [2024-11-26 16:14:03.890891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=74350 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 74350 /var/tmp/bdevperf.sock 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 74350 ']' 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:38.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.501 { 00:08:38.501 "params": { 00:08:38.501 "name": "Nvme$subsystem", 00:08:38.501 "trtype": "$TEST_TRANSPORT", 00:08:38.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.501 "adrfam": "ipv4", 00:08:38.501 "trsvcid": "$NVMF_PORT", 00:08:38.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.501 "hdgst": ${hdgst:-false}, 00:08:38.501 "ddgst": ${ddgst:-false} 00:08:38.501 }, 00:08:38.501 "method": "bdev_nvme_attach_controller" 00:08:38.501 } 00:08:38.501 EOF 00:08:38.501 )") 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:38.501 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.501 "params": { 00:08:38.501 "name": "Nvme0", 00:08:38.501 "trtype": "tcp", 00:08:38.501 "traddr": "10.0.0.3", 00:08:38.501 "adrfam": "ipv4", 00:08:38.501 "trsvcid": "4420", 00:08:38.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:38.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:38.501 "hdgst": false, 00:08:38.501 "ddgst": false 00:08:38.501 }, 00:08:38.501 "method": "bdev_nvme_attach_controller" 00:08:38.501 }' 00:08:38.501 [2024-11-26 16:14:03.995223] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:38.501 [2024-11-26 16:14:03.995319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74350 ] 00:08:38.760 [2024-11-26 16:14:04.148663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.760 [2024-11-26 16:14:04.172621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.760 [2024-11-26 16:14:04.214456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.760 Running I/O for 10 seconds... 
00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.760 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.018 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.019 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:39.019 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:39.019 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:39.279 16:14:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.279 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.279 [2024-11-26 16:14:04.747757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.747820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.747856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.747867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.747879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.747888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.747899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.747908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.747919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.747928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.747938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.747948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.747958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 
[2024-11-26 16:14:04.747967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.747978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.747986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.747998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.748007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.748018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.748026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.748037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.748046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.748056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.748065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.748093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.279 [2024-11-26 16:14:04.748103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.279 [2024-11-26 16:14:04.748114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 
16:14:04.748181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748423] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.280 [2024-11-26 16:14:04.748801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.280 [2024-11-26 16:14:04.748811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.748830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.748850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.748870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.748889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.748910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.748929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.748949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.748968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.748988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.748998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.749018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.749037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.749056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.749075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.749095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.749117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.749137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.749156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.749176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:39.281 [2024-11-26 16:14:04.749184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:39.281 [2024-11-26 16:14:04.750477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:39.281 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.281 task offset: 86784 on job bdev=Nvme0n1 fails 00:08:39.281 00:08:39.281 Latency(us) 00:08:39.281 [2024-11-26T16:14:04.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.281 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:39.281 Job: Nvme0n1 ended in about 0.43 seconds with error 00:08:39.281 Verification LBA range: start 0x0 length 0x400 00:08:39.281 Nvme0n1 : 0.43 1473.49 92.09 147.35 0.00 38234.31 2070.34 35985.22 00:08:39.281 [2024-11-26T16:14:04.934Z] =================================================================================================================== 00:08:39.281 [2024-11-26T16:14:04.934Z] Total : 
1473.49 92.09 147.35 0.00 38234.31 2070.34 35985.22 00:08:39.281 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.281 [2024-11-26 16:14:04.752562] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.281 [2024-11-26 16:14:04.752594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73b7d0 (9): Bad file descriptor 00:08:39.281 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.281 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.281 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.281 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:39.281 [2024-11-26 16:14:04.763252] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 74350 00:08:40.216 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (74350) - No such process 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.216 { 00:08:40.216 "params": { 00:08:40.216 "name": "Nvme$subsystem", 00:08:40.216 "trtype": "$TEST_TRANSPORT", 00:08:40.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.216 "adrfam": "ipv4", 00:08:40.216 "trsvcid": "$NVMF_PORT", 00:08:40.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.216 "hdgst": ${hdgst:-false}, 00:08:40.216 "ddgst": ${ddgst:-false} 00:08:40.216 }, 00:08:40.216 "method": "bdev_nvme_attach_controller" 00:08:40.216 } 00:08:40.216 EOF 00:08:40.216 )") 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:40.216 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.216 "params": { 00:08:40.216 "name": "Nvme0", 00:08:40.216 "trtype": "tcp", 00:08:40.216 "traddr": "10.0.0.3", 00:08:40.216 "adrfam": "ipv4", 00:08:40.216 "trsvcid": "4420", 00:08:40.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:40.216 "hdgst": false, 00:08:40.216 "ddgst": false 00:08:40.216 }, 00:08:40.216 "method": "bdev_nvme_attach_controller" 00:08:40.216 }' 00:08:40.216 [2024-11-26 16:14:05.824923] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:08:40.216 [2024-11-26 16:14:05.825451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74390 ] 00:08:40.475 [2024-11-26 16:14:05.974857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.475 [2024-11-26 16:14:05.993975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.475 [2024-11-26 16:14:06.030516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.734 Running I/O for 1 seconds... 00:08:41.671 1600.00 IOPS, 100.00 MiB/s 00:08:41.671 Latency(us) 00:08:41.671 [2024-11-26T16:14:07.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.671 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:41.671 Verification LBA range: start 0x0 length 0x400 00:08:41.671 Nvme0n1 : 1.01 1651.61 103.23 0.00 0.00 38018.21 5183.30 34793.66 00:08:41.671 [2024-11-26T16:14:07.324Z] =================================================================================================================== 00:08:41.671 [2024-11-26T16:14:07.324Z] Total : 1651.61 103.23 0.00 0.00 38018.21 5183.30 34793.66 00:08:41.671 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:41.671 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:41.671 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:41.671 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:41.671 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:41.671 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.671 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.930 rmmod nvme_tcp 00:08:41.930 rmmod nvme_fabrics 
00:08:41.930 rmmod nvme_keyring 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 74309 ']' 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 74309 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 74309 ']' 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 74309 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74309 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:41.930 killing process with pid 74309 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74309' 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 74309 00:08:41.930 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 74309 00:08:41.930 [2024-11-26 16:14:07.552443] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 
-- # ip link set nvmf_tgt_br2 nomaster 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:42.189 00:08:42.189 real 0m5.139s 00:08:42.189 user 0m17.776s 00:08:42.189 sys 0m1.371s 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.189 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.189 ************************************ 00:08:42.189 END TEST nvmf_host_management 00:08:42.189 ************************************ 00:08:42.449 16:14:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:42.449 16:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.449 16:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.449 16:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.449 ************************************ 00:08:42.449 START TEST nvmf_lvol 00:08:42.449 ************************************ 00:08:42.449 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:42.449 * Looking for test storage... 
00:08:42.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:42.449 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.449 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.449 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.449 --rc genhtml_branch_coverage=1 00:08:42.449 --rc genhtml_function_coverage=1 00:08:42.449 --rc genhtml_legend=1 00:08:42.449 --rc geninfo_all_blocks=1 00:08:42.449 --rc geninfo_unexecuted_blocks=1 00:08:42.449 00:08:42.449 ' 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.449 --rc genhtml_branch_coverage=1 00:08:42.449 --rc genhtml_function_coverage=1 00:08:42.449 --rc genhtml_legend=1 00:08:42.449 --rc geninfo_all_blocks=1 00:08:42.449 --rc geninfo_unexecuted_blocks=1 00:08:42.449 00:08:42.449 ' 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.449 --rc genhtml_branch_coverage=1 00:08:42.449 --rc genhtml_function_coverage=1 00:08:42.449 --rc genhtml_legend=1 00:08:42.449 --rc geninfo_all_blocks=1 00:08:42.449 --rc geninfo_unexecuted_blocks=1 00:08:42.449 00:08:42.449 ' 00:08:42.449 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.450 --rc genhtml_branch_coverage=1 00:08:42.450 --rc genhtml_function_coverage=1 00:08:42.450 --rc genhtml_legend=1 00:08:42.450 --rc geninfo_all_blocks=1 00:08:42.450 --rc geninfo_unexecuted_blocks=1 00:08:42.450 00:08:42.450 ' 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.450 16:14:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.450 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:42.450 
16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:42.450 Cannot find device "nvmf_init_br" 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:42.450 Cannot find device "nvmf_init_br2" 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:42.450 Cannot find device "nvmf_tgt_br" 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:42.450 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.709 Cannot find device "nvmf_tgt_br2" 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:42.710 Cannot find device "nvmf_init_br" 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:42.710 Cannot find device "nvmf_init_br2" 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:42.710 Cannot find device "nvmf_tgt_br" 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:42.710 Cannot find device "nvmf_tgt_br2" 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:42.710 Cannot find device "nvmf_br" 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:42.710 Cannot find device "nvmf_init_if" 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:42.710 Cannot find device "nvmf_init_if2" 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:42.710 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:42.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:42.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:08:42.969 00:08:42.969 --- 10.0.0.3 ping statistics --- 00:08:42.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.969 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:42.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:42.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:08:42.969 00:08:42.969 --- 10.0.0.4 ping statistics --- 00:08:42.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.969 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:42.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:42.969 00:08:42.969 --- 10.0.0.1 ping statistics --- 00:08:42.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.969 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:42.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:42.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:42.969 00:08:42.969 --- 10.0.0.2 ping statistics --- 00:08:42.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.969 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=74657 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 74657 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 74657 ']' 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.969 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.970 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.970 [2024-11-26 16:14:08.551810] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:42.970 [2024-11-26 16:14:08.552406] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.228 [2024-11-26 16:14:08.695750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.228 [2024-11-26 16:14:08.714910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.228 [2024-11-26 16:14:08.714997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.228 [2024-11-26 16:14:08.715024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.228 [2024-11-26 16:14:08.715031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.228 [2024-11-26 16:14:08.715037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.228 [2024-11-26 16:14:08.715812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.228 [2024-11-26 16:14:08.715969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.228 [2024-11-26 16:14:08.715974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.228 [2024-11-26 16:14:08.745461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.228 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.228 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:43.228 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:43.228 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:43.228 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:43.228 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.228 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:43.487 [2024-11-26 16:14:09.046468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.487 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.071 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:44.071 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.071 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:44.071 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:44.330 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:44.589 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=19d16d6a-cd42-42ed-ac8f-22d0d2fdf22d 00:08:44.589 16:14:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 19d16d6a-cd42-42ed-ac8f-22d0d2fdf22d lvol 20 00:08:45.156 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=29101918-2649-43a0-8a69-038abe89ae07 00:08:45.156 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:45.156 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 29101918-2649-43a0-8a69-038abe89ae07 00:08:45.414 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:45.673 [2024-11-26 16:14:11.203418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:45.673 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:45.932 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=74724 00:08:45.932 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:45.932 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:46.865 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 29101918-2649-43a0-8a69-038abe89ae07 MY_SNAPSHOT 00:08:47.440 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=257c5974-2872-47aa-b91f-700dab8fcec5 00:08:47.440 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 29101918-2649-43a0-8a69-038abe89ae07 30 00:08:47.722 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 257c5974-2872-47aa-b91f-700dab8fcec5 MY_CLONE 00:08:47.997 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=63faec7b-b8c2-484e-bf92-958c56675327 00:08:47.997 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 63faec7b-b8c2-484e-bf92-958c56675327 00:08:48.563 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 74724 00:08:56.673 Initializing NVMe Controllers 00:08:56.673 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:56.673 Controller IO queue size 128, less than required. 00:08:56.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:56.673 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:56.673 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:56.673 Initialization complete. Launching workers. 
00:08:56.673 ======================================================== 00:08:56.673 Latency(us) 00:08:56.673 Device Information : IOPS MiB/s Average min max 00:08:56.673 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10555.20 41.23 12128.66 2391.84 77268.69 00:08:56.673 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10402.90 40.64 12313.97 1852.38 68409.92 00:08:56.673 ======================================================== 00:08:56.673 Total : 20958.10 81.87 12220.64 1852.38 77268.69 00:08:56.673 00:08:56.673 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:56.673 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 29101918-2649-43a0-8a69-038abe89ae07 00:08:56.673 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19d16d6a-cd42-42ed-ac8f-22d0d2fdf22d 00:08:56.931 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:56.931 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:56.931 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:56.931 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.931 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:57.191 rmmod nvme_tcp 00:08:57.191 rmmod nvme_fabrics 00:08:57.191 rmmod nvme_keyring 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 74657 ']' 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 74657 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 74657 ']' 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 74657 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74657 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.191 killing process with pid 74657 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 74657' 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 74657 00:08:57.191 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 74657 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:57.506 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:57.506 00:08:57.506 real 0m15.275s 00:08:57.506 user 1m3.251s 00:08:57.506 sys 0m4.219s 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:57.506 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:57.506 ************************************ 00:08:57.506 END TEST nvmf_lvol 00:08:57.506 ************************************ 00:08:57.765 16:14:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:57.765 16:14:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.765 16:14:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.765 16:14:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.765 ************************************ 00:08:57.765 START TEST nvmf_lvs_grow 00:08:57.765 ************************************ 00:08:57.765 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:57.765 * Looking for test storage... 00:08:57.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:57.765 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:57.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.766 --rc genhtml_branch_coverage=1 00:08:57.766 --rc genhtml_function_coverage=1 00:08:57.766 --rc genhtml_legend=1 00:08:57.766 --rc geninfo_all_blocks=1 00:08:57.766 --rc geninfo_unexecuted_blocks=1 00:08:57.766 00:08:57.766 ' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:57.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.766 --rc genhtml_branch_coverage=1 00:08:57.766 --rc genhtml_function_coverage=1 00:08:57.766 --rc genhtml_legend=1 00:08:57.766 --rc geninfo_all_blocks=1 00:08:57.766 --rc geninfo_unexecuted_blocks=1 00:08:57.766 00:08:57.766 ' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:57.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.766 --rc genhtml_branch_coverage=1 00:08:57.766 --rc genhtml_function_coverage=1 00:08:57.766 --rc genhtml_legend=1 00:08:57.766 --rc geninfo_all_blocks=1 00:08:57.766 --rc geninfo_unexecuted_blocks=1 00:08:57.766 00:08:57.766 ' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:57.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.766 --rc genhtml_branch_coverage=1 00:08:57.766 --rc genhtml_function_coverage=1 00:08:57.766 --rc genhtml_legend=1 00:08:57.766 --rc geninfo_all_blocks=1 00:08:57.766 --rc geninfo_unexecuted_blocks=1 00:08:57.766 00:08:57.766 ' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:57.766 16:14:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.766 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
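Condensed from the xtrace above, the nvmf_lvol run boils down to roughly the following RPC sequence (a sketch only, reconstructed from the trace: capturing the lvstore/lvol UUIDs into shell variables is assumed to work as shown, and the discovery listener, waits, and error handling in target/nvmf_lvol.sh are omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                   # Malloc0
  $rpc bdev_malloc_create 64 512                                   # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore on top of the raid0 bdev
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # create lvol 'lvol', size 20
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # spdk_nvme_perf (randwrite, qd 128, 10 s) runs against 10.0.0.3:4420 while the volume is manipulated:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  # teardown once perf (pid 74724 above) completes
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"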
00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.766 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.767 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.767 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.767 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.767 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.026 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:58.027 Cannot find device "nvmf_init_br" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:58.027 Cannot find device "nvmf_init_br2" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:58.027 Cannot find device "nvmf_tgt_br" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.027 Cannot find device "nvmf_tgt_br2" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:58.027 Cannot find device "nvmf_init_br" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:58.027 Cannot find device "nvmf_init_br2" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:58.027 Cannot find device "nvmf_tgt_br" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:58.027 Cannot find device "nvmf_tgt_br2" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:58.027 Cannot find device "nvmf_br" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:58.027 Cannot find device "nvmf_init_if" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:58.027 Cannot find device "nvmf_init_if2" 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.027 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
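Stripped of the xtrace prefixes, the fixture that nvmf_veth_init has just rebuilt amounts to roughly the following (a sketch using the interface names and addresses from the trace; the "Cannot find device" cleanup attempts and error suppression are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$l" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br   # bridge the initiator and target veth peers
  done

The iptables ACCEPT rules added next are inserted by the ipts helper with an '-m comment --comment SPDK_NVMF:...' tag, which is what later lets iptr restore the original rule set by filtering the tagged lines out of iptables-save before iptables-restore; the four pings then confirm reachability in both directions across the bridge before nvmf_tgt is started inside the namespace.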
00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:58.286 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:58.286 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:08:58.286 00:08:58.286 --- 10.0.0.3 ping statistics --- 00:08:58.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.286 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:58.286 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:58.286 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:08:58.286 00:08:58.286 --- 10.0.0.4 ping statistics --- 00:08:58.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.286 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:58.286 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:58.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:58.286 00:08:58.286 --- 10.0.0.1 ping statistics --- 00:08:58.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.287 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:58.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:58.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:08:58.287 00:08:58.287 --- 10.0.0.2 ping statistics --- 00:08:58.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.287 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=75095 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 75095 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 75095 ']' 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.287 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.287 [2024-11-26 16:14:23.910211] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:08:58.287 [2024-11-26 16:14:23.910295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.545 [2024-11-26 16:14:24.053055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.545 [2024-11-26 16:14:24.073235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.545 [2024-11-26 16:14:24.073300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.545 [2024-11-26 16:14:24.073311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.545 [2024-11-26 16:14:24.073318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.545 [2024-11-26 16:14:24.073325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.545 [2024-11-26 16:14:24.073673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.545 [2024-11-26 16:14:24.103526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.545 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.545 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:58.545 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:58.545 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:58.545 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.805 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.805 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.064 [2024-11-26 16:14:24.516154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.064 ************************************ 00:08:59.064 START TEST lvs_grow_clean 00:08:59.064 ************************************ 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:59.064 16:14:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:59.064 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.324 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:59.324 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:59.584 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6310c5a8-af12-4108-b7bf-5622f056e2bd 00:08:59.584 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:08:59.584 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:59.843 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:59.843 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:59.843 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6310c5a8-af12-4108-b7bf-5622f056e2bd lvol 150 00:09:00.102 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a73aec60-cbd3-4e94-9ca6-2955560b3578 00:09:00.102 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:00.102 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:00.363 [2024-11-26 16:14:25.909967] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:00.363 [2024-11-26 16:14:25.910072] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:00.363 true 00:09:00.364 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:00.364 16:14:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:00.628 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:00.628 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.886 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a73aec60-cbd3-4e94-9ca6-2955560b3578 00:09:01.145 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:01.404 [2024-11-26 16:14:26.994635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:01.404 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75170 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75170 /var/tmp/bdevperf.sock 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 75170 ']' 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.664 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:01.923 [2024-11-26 16:14:27.335760] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:09:01.923 [2024-11-26 16:14:27.335842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75170 ] 00:09:01.923 [2024-11-26 16:14:27.478450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.923 [2024-11-26 16:14:27.498277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.923 [2024-11-26 16:14:27.527345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.923 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.923 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:01.923 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:02.491 Nvme0n1 00:09:02.491 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:02.491 [ 00:09:02.491 { 00:09:02.491 "name": "Nvme0n1", 00:09:02.491 "aliases": [ 00:09:02.491 "a73aec60-cbd3-4e94-9ca6-2955560b3578" 00:09:02.491 ], 00:09:02.491 "product_name": "NVMe disk", 00:09:02.491 "block_size": 4096, 00:09:02.491 "num_blocks": 38912, 00:09:02.491 "uuid": "a73aec60-cbd3-4e94-9ca6-2955560b3578", 00:09:02.491 "numa_id": -1, 00:09:02.491 "assigned_rate_limits": { 00:09:02.491 "rw_ios_per_sec": 0, 00:09:02.491 "rw_mbytes_per_sec": 0, 00:09:02.491 "r_mbytes_per_sec": 0, 00:09:02.491 "w_mbytes_per_sec": 0 00:09:02.491 }, 00:09:02.491 "claimed": false, 00:09:02.491 "zoned": false, 00:09:02.491 "supported_io_types": { 00:09:02.491 "read": true, 00:09:02.491 "write": true, 00:09:02.491 "unmap": true, 00:09:02.491 "flush": true, 00:09:02.491 "reset": true, 00:09:02.491 "nvme_admin": true, 00:09:02.491 "nvme_io": true, 00:09:02.491 "nvme_io_md": false, 00:09:02.491 "write_zeroes": true, 00:09:02.491 "zcopy": false, 00:09:02.491 "get_zone_info": false, 00:09:02.491 "zone_management": false, 00:09:02.491 "zone_append": false, 00:09:02.491 "compare": true, 00:09:02.491 "compare_and_write": true, 00:09:02.491 "abort": true, 00:09:02.491 "seek_hole": false, 00:09:02.491 "seek_data": false, 00:09:02.492 "copy": true, 00:09:02.492 "nvme_iov_md": false 00:09:02.492 }, 00:09:02.492 "memory_domains": [ 00:09:02.492 { 00:09:02.492 "dma_device_id": "system", 00:09:02.492 "dma_device_type": 1 00:09:02.492 } 00:09:02.492 ], 00:09:02.492 "driver_specific": { 00:09:02.492 "nvme": [ 00:09:02.492 { 00:09:02.492 "trid": { 00:09:02.492 "trtype": "TCP", 00:09:02.492 "adrfam": "IPv4", 00:09:02.492 "traddr": "10.0.0.3", 00:09:02.492 "trsvcid": "4420", 00:09:02.492 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:02.492 }, 00:09:02.492 "ctrlr_data": { 00:09:02.492 "cntlid": 1, 00:09:02.492 "vendor_id": "0x8086", 00:09:02.492 "model_number": "SPDK bdev Controller", 00:09:02.492 "serial_number": "SPDK0", 00:09:02.492 "firmware_revision": "25.01", 00:09:02.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:02.492 "oacs": { 00:09:02.492 "security": 0, 00:09:02.492 "format": 0, 00:09:02.492 "firmware": 0, 
00:09:02.492 "ns_manage": 0 00:09:02.492 }, 00:09:02.492 "multi_ctrlr": true, 00:09:02.492 "ana_reporting": false 00:09:02.492 }, 00:09:02.492 "vs": { 00:09:02.492 "nvme_version": "1.3" 00:09:02.492 }, 00:09:02.492 "ns_data": { 00:09:02.492 "id": 1, 00:09:02.492 "can_share": true 00:09:02.492 } 00:09:02.492 } 00:09:02.492 ], 00:09:02.492 "mp_policy": "active_passive" 00:09:02.492 } 00:09:02.492 } 00:09:02.492 ] 00:09:02.492 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75186 00:09:02.492 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:02.492 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:02.751 Running I/O for 10 seconds... 00:09:03.688 Latency(us) 00:09:03.688 [2024-11-26T16:14:29.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.689 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:03.689 [2024-11-26T16:14:29.342Z] =================================================================================================================== 00:09:03.689 [2024-11-26T16:14:29.342Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:03.689 00:09:04.625 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:04.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.625 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:04.625 [2024-11-26T16:14:30.278Z] =================================================================================================================== 00:09:04.625 [2024-11-26T16:14:30.278Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:04.625 00:09:04.884 true 00:09:04.884 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:04.884 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:05.453 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:05.453 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:05.453 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 75186 00:09:05.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.712 Nvme0n1 : 3.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:09:05.712 [2024-11-26T16:14:31.365Z] =================================================================================================================== 00:09:05.712 [2024-11-26T16:14:31.365Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:09:05.712 00:09:06.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.649 Nvme0n1 : 4.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:06.649 [2024-11-26T16:14:32.302Z] 
=================================================================================================================== 00:09:06.649 [2024-11-26T16:14:32.302Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:06.649 00:09:07.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.606 Nvme0n1 : 5.00 6375.40 24.90 0.00 0.00 0.00 0.00 0.00 00:09:07.606 [2024-11-26T16:14:33.259Z] =================================================================================================================== 00:09:07.606 [2024-11-26T16:14:33.259Z] Total : 6375.40 24.90 0.00 0.00 0.00 0.00 0.00 00:09:07.606 00:09:08.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.983 Nvme0n1 : 6.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:08.983 [2024-11-26T16:14:34.636Z] =================================================================================================================== 00:09:08.983 [2024-11-26T16:14:34.636Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:08.983 00:09:09.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.919 Nvme0n1 : 7.00 6313.71 24.66 0.00 0.00 0.00 0.00 0.00 00:09:09.919 [2024-11-26T16:14:35.572Z] =================================================================================================================== 00:09:09.919 [2024-11-26T16:14:35.572Z] Total : 6313.71 24.66 0.00 0.00 0.00 0.00 0.00 00:09:09.919 00:09:10.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.856 Nvme0n1 : 8.00 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:09:10.856 [2024-11-26T16:14:36.509Z] =================================================================================================================== 00:09:10.856 [2024-11-26T16:14:36.509Z] Total : 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:09:10.856 00:09:11.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.791 Nvme0n1 : 9.00 6265.33 24.47 0.00 0.00 0.00 0.00 0.00 00:09:11.791 [2024-11-26T16:14:37.444Z] =================================================================================================================== 00:09:11.791 [2024-11-26T16:14:37.444Z] Total : 6265.33 24.47 0.00 0.00 0.00 0.00 0.00 00:09:11.791 00:09:12.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.728 Nvme0n1 : 10.00 6235.70 24.36 0.00 0.00 0.00 0.00 0.00 00:09:12.728 [2024-11-26T16:14:38.381Z] =================================================================================================================== 00:09:12.728 [2024-11-26T16:14:38.381Z] Total : 6235.70 24.36 0.00 0.00 0.00 0.00 0.00 00:09:12.728 00:09:12.728 00:09:12.728 Latency(us) 00:09:12.728 [2024-11-26T16:14:38.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.728 Nvme0n1 : 10.01 6242.63 24.39 0.00 0.00 20498.61 17396.83 44326.17 00:09:12.728 [2024-11-26T16:14:38.381Z] =================================================================================================================== 00:09:12.728 [2024-11-26T16:14:38.381Z] Total : 6242.63 24.39 0.00 0.00 20498.61 17396.83 44326.17 00:09:12.728 { 00:09:12.728 "results": [ 00:09:12.728 { 00:09:12.728 "job": "Nvme0n1", 00:09:12.728 "core_mask": "0x2", 00:09:12.728 "workload": "randwrite", 00:09:12.728 "status": "finished", 00:09:12.728 "queue_depth": 128, 00:09:12.728 "io_size": 4096, 00:09:12.728 "runtime": 
10.009406, 00:09:12.728 "iops": 6242.6281839301955, 00:09:12.728 "mibps": 24.385266343477326, 00:09:12.728 "io_failed": 0, 00:09:12.728 "io_timeout": 0, 00:09:12.728 "avg_latency_us": 20498.607395578576, 00:09:12.728 "min_latency_us": 17396.82909090909, 00:09:12.728 "max_latency_us": 44326.167272727274 00:09:12.728 } 00:09:12.728 ], 00:09:12.728 "core_count": 1 00:09:12.728 } 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75170 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 75170 ']' 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 75170 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75170 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:12.728 killing process with pid 75170 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75170' 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 75170 00:09:12.728 Received shutdown signal, test time was about 10.000000 seconds 00:09:12.728 00:09:12.728 Latency(us) 00:09:12.728 [2024-11-26T16:14:38.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.728 [2024-11-26T16:14:38.381Z] =================================================================================================================== 00:09:12.728 [2024-11-26T16:14:38.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.728 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 75170 00:09:12.987 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:13.246 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:13.505 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:13.505 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:13.763 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:13.763 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:13.763 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.025 [2024-11-26 16:14:39.643100] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:14.284 16:14:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:14.543 request: 00:09:14.543 { 00:09:14.543 "uuid": "6310c5a8-af12-4108-b7bf-5622f056e2bd", 00:09:14.543 "method": "bdev_lvol_get_lvstores", 00:09:14.543 "req_id": 1 00:09:14.543 } 00:09:14.543 Got JSON-RPC error response 00:09:14.543 response: 00:09:14.543 { 00:09:14.543 "code": -19, 00:09:14.543 "message": "No such device" 00:09:14.543 } 00:09:14.543 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:14.543 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.543 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:14.543 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.543 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.802 aio_bdev 00:09:14.802 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a73aec60-cbd3-4e94-9ca6-2955560b3578 00:09:14.802 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a73aec60-cbd3-4e94-9ca6-2955560b3578 00:09:14.802 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.802 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:14.802 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.802 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.802 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:15.061 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a73aec60-cbd3-4e94-9ca6-2955560b3578 -t 2000 00:09:15.320 [ 00:09:15.320 { 00:09:15.320 "name": "a73aec60-cbd3-4e94-9ca6-2955560b3578", 00:09:15.320 "aliases": [ 00:09:15.320 "lvs/lvol" 00:09:15.320 ], 00:09:15.320 "product_name": "Logical Volume", 00:09:15.320 "block_size": 4096, 00:09:15.320 "num_blocks": 38912, 00:09:15.320 "uuid": "a73aec60-cbd3-4e94-9ca6-2955560b3578", 00:09:15.320 "assigned_rate_limits": { 00:09:15.320 "rw_ios_per_sec": 0, 00:09:15.320 "rw_mbytes_per_sec": 0, 00:09:15.320 "r_mbytes_per_sec": 0, 00:09:15.320 "w_mbytes_per_sec": 0 00:09:15.320 }, 00:09:15.320 "claimed": false, 00:09:15.320 "zoned": false, 00:09:15.320 "supported_io_types": { 00:09:15.320 "read": true, 00:09:15.320 "write": true, 00:09:15.320 "unmap": true, 00:09:15.320 "flush": false, 00:09:15.320 "reset": true, 00:09:15.320 "nvme_admin": false, 00:09:15.320 "nvme_io": false, 00:09:15.320 "nvme_io_md": false, 00:09:15.320 "write_zeroes": true, 00:09:15.320 "zcopy": false, 00:09:15.320 "get_zone_info": false, 00:09:15.320 "zone_management": false, 00:09:15.320 "zone_append": false, 00:09:15.320 "compare": false, 00:09:15.320 "compare_and_write": false, 00:09:15.320 "abort": false, 00:09:15.320 "seek_hole": true, 00:09:15.320 "seek_data": true, 00:09:15.320 "copy": false, 00:09:15.320 "nvme_iov_md": false 00:09:15.320 }, 00:09:15.320 "driver_specific": { 00:09:15.320 "lvol": { 00:09:15.320 "lvol_store_uuid": "6310c5a8-af12-4108-b7bf-5622f056e2bd", 00:09:15.320 "base_bdev": "aio_bdev", 00:09:15.320 "thin_provision": false, 00:09:15.320 "num_allocated_clusters": 38, 00:09:15.320 "snapshot": false, 00:09:15.320 "clone": false, 00:09:15.320 "esnap_clone": false 00:09:15.320 } 00:09:15.320 } 00:09:15.320 } 00:09:15.320 ] 00:09:15.320 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:15.320 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:15.320 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:15.887 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:15.887 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:15.887 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:15.887 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:15.887 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a73aec60-cbd3-4e94-9ca6-2955560b3578 00:09:16.455 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6310c5a8-af12-4108-b7bf-5622f056e2bd 00:09:16.714 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.973 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:17.232 00:09:17.232 real 0m18.315s 00:09:17.232 user 0m17.118s 00:09:17.232 sys 0m2.519s 00:09:17.232 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.232 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:17.232 ************************************ 00:09:17.232 END TEST lvs_grow_clean 00:09:17.232 ************************************ 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.491 ************************************ 00:09:17.491 START TEST lvs_grow_dirty 00:09:17.491 ************************************ 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:17.491 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:17.750 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:17.750 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:18.009 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:18.009 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:18.009 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:18.268 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:18.268 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:18.268 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 lvol 150 00:09:18.836 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b9956149-7382-4ce7-98cf-74b3efd2d084 00:09:18.836 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:18.836 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:18.836 [2024-11-26 16:14:44.480490] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:18.836 [2024-11-26 16:14:44.480571] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:19.094 true 00:09:19.094 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:19.094 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:19.354 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:19.354 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:19.614 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b9956149-7382-4ce7-98cf-74b3efd2d084 00:09:19.614 16:14:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:19.874 [2024-11-26 16:14:45.481229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:19.874 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75445 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75445 /var/tmp/bdevperf.sock 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 75445 ']' 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.134 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:20.393 [2024-11-26 16:14:45.788435] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:09:20.393 [2024-11-26 16:14:45.788542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75445 ] 00:09:20.393 [2024-11-26 16:14:45.934920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.393 [2024-11-26 16:14:45.959941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.393 [2024-11-26 16:14:45.994194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.652 16:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.652 16:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:20.652 16:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:20.911 Nvme0n1 00:09:20.911 16:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:21.170 [ 00:09:21.170 { 00:09:21.170 "name": "Nvme0n1", 00:09:21.170 "aliases": [ 00:09:21.170 "b9956149-7382-4ce7-98cf-74b3efd2d084" 00:09:21.170 ], 00:09:21.170 "product_name": "NVMe disk", 00:09:21.170 "block_size": 4096, 00:09:21.170 "num_blocks": 38912, 00:09:21.170 "uuid": "b9956149-7382-4ce7-98cf-74b3efd2d084", 00:09:21.170 "numa_id": -1, 00:09:21.170 "assigned_rate_limits": { 00:09:21.170 "rw_ios_per_sec": 0, 00:09:21.170 "rw_mbytes_per_sec": 0, 00:09:21.170 "r_mbytes_per_sec": 0, 00:09:21.170 "w_mbytes_per_sec": 0 00:09:21.170 }, 00:09:21.170 "claimed": false, 00:09:21.170 "zoned": false, 00:09:21.170 "supported_io_types": { 00:09:21.170 "read": true, 00:09:21.170 "write": true, 00:09:21.170 "unmap": true, 00:09:21.170 "flush": true, 00:09:21.170 "reset": true, 00:09:21.170 "nvme_admin": true, 00:09:21.170 "nvme_io": true, 00:09:21.170 "nvme_io_md": false, 00:09:21.170 "write_zeroes": true, 00:09:21.170 "zcopy": false, 00:09:21.170 "get_zone_info": false, 00:09:21.170 "zone_management": false, 00:09:21.170 "zone_append": false, 00:09:21.170 "compare": true, 00:09:21.170 "compare_and_write": true, 00:09:21.170 "abort": true, 00:09:21.170 "seek_hole": false, 00:09:21.170 "seek_data": false, 00:09:21.170 "copy": true, 00:09:21.170 "nvme_iov_md": false 00:09:21.170 }, 00:09:21.170 "memory_domains": [ 00:09:21.170 { 00:09:21.170 "dma_device_id": "system", 00:09:21.170 "dma_device_type": 1 00:09:21.170 } 00:09:21.170 ], 00:09:21.170 "driver_specific": { 00:09:21.170 "nvme": [ 00:09:21.170 { 00:09:21.170 "trid": { 00:09:21.170 "trtype": "TCP", 00:09:21.170 "adrfam": "IPv4", 00:09:21.170 "traddr": "10.0.0.3", 00:09:21.170 "trsvcid": "4420", 00:09:21.170 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:21.170 }, 00:09:21.170 "ctrlr_data": { 00:09:21.170 "cntlid": 1, 00:09:21.170 "vendor_id": "0x8086", 00:09:21.170 "model_number": "SPDK bdev Controller", 00:09:21.170 "serial_number": "SPDK0", 00:09:21.170 "firmware_revision": "25.01", 00:09:21.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:21.170 "oacs": { 00:09:21.170 "security": 0, 00:09:21.170 "format": 0, 00:09:21.170 "firmware": 0, 
00:09:21.170 "ns_manage": 0 00:09:21.170 }, 00:09:21.170 "multi_ctrlr": true, 00:09:21.170 "ana_reporting": false 00:09:21.170 }, 00:09:21.170 "vs": { 00:09:21.170 "nvme_version": "1.3" 00:09:21.170 }, 00:09:21.170 "ns_data": { 00:09:21.170 "id": 1, 00:09:21.170 "can_share": true 00:09:21.170 } 00:09:21.170 } 00:09:21.170 ], 00:09:21.170 "mp_policy": "active_passive" 00:09:21.170 } 00:09:21.170 } 00:09:21.170 ] 00:09:21.170 16:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75461 00:09:21.170 16:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:21.170 16:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:21.170 Running I/O for 10 seconds... 00:09:22.548 Latency(us) 00:09:22.548 [2024-11-26T16:14:48.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.548 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:22.548 [2024-11-26T16:14:48.201Z] =================================================================================================================== 00:09:22.548 [2024-11-26T16:14:48.201Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:22.548 00:09:23.116 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:23.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.375 Nvme0n1 : 2.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:23.375 [2024-11-26T16:14:49.028Z] =================================================================================================================== 00:09:23.375 [2024-11-26T16:14:49.028Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:23.375 00:09:23.635 true 00:09:23.635 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:23.635 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:23.893 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:23.893 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:23.893 16:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 75461 00:09:24.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.152 Nvme0n1 : 3.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:24.152 [2024-11-26T16:14:49.805Z] =================================================================================================================== 00:09:24.152 [2024-11-26T16:14:49.805Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:24.152 00:09:25.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.530 Nvme0n1 : 4.00 6316.75 24.67 0.00 0.00 0.00 0.00 0.00 00:09:25.530 [2024-11-26T16:14:51.183Z] 
=================================================================================================================== 00:09:25.530 [2024-11-26T16:14:51.183Z] Total : 6316.75 24.67 0.00 0.00 0.00 0.00 0.00 00:09:25.530 00:09:26.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.466 Nvme0n1 : 5.00 6257.00 24.44 0.00 0.00 0.00 0.00 0.00 00:09:26.466 [2024-11-26T16:14:52.119Z] =================================================================================================================== 00:09:26.466 [2024-11-26T16:14:52.119Z] Total : 6257.00 24.44 0.00 0.00 0.00 0.00 0.00 00:09:26.466 00:09:27.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.403 Nvme0n1 : 6.00 6209.00 24.25 0.00 0.00 0.00 0.00 0.00 00:09:27.403 [2024-11-26T16:14:53.056Z] =================================================================================================================== 00:09:27.403 [2024-11-26T16:14:53.056Z] Total : 6209.00 24.25 0.00 0.00 0.00 0.00 0.00 00:09:27.403 00:09:28.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.339 Nvme0n1 : 7.00 6211.00 24.26 0.00 0.00 0.00 0.00 0.00 00:09:28.339 [2024-11-26T16:14:53.992Z] =================================================================================================================== 00:09:28.339 [2024-11-26T16:14:53.992Z] Total : 6211.00 24.26 0.00 0.00 0.00 0.00 0.00 00:09:28.339 00:09:29.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.274 Nvme0n1 : 8.00 6173.25 24.11 0.00 0.00 0.00 0.00 0.00 00:09:29.274 [2024-11-26T16:14:54.927Z] =================================================================================================================== 00:09:29.274 [2024-11-26T16:14:54.927Z] Total : 6173.25 24.11 0.00 0.00 0.00 0.00 0.00 00:09:29.274 00:09:30.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.211 Nvme0n1 : 9.00 6150.56 24.03 0.00 0.00 0.00 0.00 0.00 00:09:30.211 [2024-11-26T16:14:55.864Z] =================================================================================================================== 00:09:30.211 [2024-11-26T16:14:55.864Z] Total : 6150.56 24.03 0.00 0.00 0.00 0.00 0.00 00:09:30.211 00:09:31.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.148 Nvme0n1 : 10.00 6145.10 24.00 0.00 0.00 0.00 0.00 0.00 00:09:31.148 [2024-11-26T16:14:56.801Z] =================================================================================================================== 00:09:31.148 [2024-11-26T16:14:56.801Z] Total : 6145.10 24.00 0.00 0.00 0.00 0.00 0.00 00:09:31.148 00:09:31.148 00:09:31.148 Latency(us) 00:09:31.148 [2024-11-26T16:14:56.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.148 Nvme0n1 : 10.01 6150.61 24.03 0.00 0.00 20805.48 12034.79 94371.84 00:09:31.148 [2024-11-26T16:14:56.801Z] =================================================================================================================== 00:09:31.148 [2024-11-26T16:14:56.801Z] Total : 6150.61 24.03 0.00 0.00 20805.48 12034.79 94371.84 00:09:31.148 { 00:09:31.148 "results": [ 00:09:31.148 { 00:09:31.148 "job": "Nvme0n1", 00:09:31.148 "core_mask": "0x2", 00:09:31.148 "workload": "randwrite", 00:09:31.148 "status": "finished", 00:09:31.148 "queue_depth": 128, 00:09:31.148 "io_size": 4096, 00:09:31.148 "runtime": 
10.011845, 00:09:31.148 "iops": 6150.614597009841, 00:09:31.148 "mibps": 24.025838269569693, 00:09:31.148 "io_failed": 0, 00:09:31.148 "io_timeout": 0, 00:09:31.148 "avg_latency_us": 20805.477225913794, 00:09:31.148 "min_latency_us": 12034.792727272727, 00:09:31.148 "max_latency_us": 94371.84 00:09:31.148 } 00:09:31.148 ], 00:09:31.148 "core_count": 1 00:09:31.148 } 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75445 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 75445 ']' 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 75445 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75445 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:31.408 killing process with pid 75445 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75445' 00:09:31.408 Received shutdown signal, test time was about 10.000000 seconds 00:09:31.408 00:09:31.408 Latency(us) 00:09:31.408 [2024-11-26T16:14:57.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.408 [2024-11-26T16:14:57.061Z] =================================================================================================================== 00:09:31.408 [2024-11-26T16:14:57.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 75445 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 75445 00:09:31.408 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:31.668 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:31.927 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:31.927 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:32.186 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:32.186 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:32.186 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 75095 00:09:32.186 
16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 75095 00:09:32.187 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 75095 Killed "${NVMF_APP[@]}" "$@" 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=75595 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 75595 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 75595 ']' 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.187 16:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.187 [2024-11-26 16:14:57.785902] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:09:32.187 [2024-11-26 16:14:57.785984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.446 [2024-11-26 16:14:57.925621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.446 [2024-11-26 16:14:57.944000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.446 [2024-11-26 16:14:57.944070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.446 [2024-11-26 16:14:57.944096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.446 [2024-11-26 16:14:57.944103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.446 [2024-11-26 16:14:57.944109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:32.446 [2024-11-26 16:14:57.944439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.446 [2024-11-26 16:14:57.973019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.446 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.446 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:32.446 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.446 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.446 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.446 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.446 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:32.705 [2024-11-26 16:14:58.267413] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:32.705 [2024-11-26 16:14:58.267684] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:32.705 [2024-11-26 16:14:58.267882] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:32.705 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:32.705 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b9956149-7382-4ce7-98cf-74b3efd2d084 00:09:32.705 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b9956149-7382-4ce7-98cf-74b3efd2d084 00:09:32.705 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.705 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:32.705 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.705 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.705 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:32.965 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9956149-7382-4ce7-98cf-74b3efd2d084 -t 2000 00:09:33.224 [ 00:09:33.224 { 00:09:33.224 "name": "b9956149-7382-4ce7-98cf-74b3efd2d084", 00:09:33.224 "aliases": [ 00:09:33.224 "lvs/lvol" 00:09:33.224 ], 00:09:33.224 "product_name": "Logical Volume", 00:09:33.224 "block_size": 4096, 00:09:33.224 "num_blocks": 38912, 00:09:33.224 "uuid": "b9956149-7382-4ce7-98cf-74b3efd2d084", 00:09:33.224 "assigned_rate_limits": { 00:09:33.224 "rw_ios_per_sec": 0, 00:09:33.224 "rw_mbytes_per_sec": 0, 00:09:33.224 "r_mbytes_per_sec": 0, 00:09:33.224 "w_mbytes_per_sec": 0 00:09:33.224 }, 00:09:33.224 
"claimed": false, 00:09:33.224 "zoned": false, 00:09:33.224 "supported_io_types": { 00:09:33.224 "read": true, 00:09:33.224 "write": true, 00:09:33.224 "unmap": true, 00:09:33.224 "flush": false, 00:09:33.224 "reset": true, 00:09:33.224 "nvme_admin": false, 00:09:33.224 "nvme_io": false, 00:09:33.224 "nvme_io_md": false, 00:09:33.224 "write_zeroes": true, 00:09:33.224 "zcopy": false, 00:09:33.224 "get_zone_info": false, 00:09:33.224 "zone_management": false, 00:09:33.224 "zone_append": false, 00:09:33.224 "compare": false, 00:09:33.224 "compare_and_write": false, 00:09:33.224 "abort": false, 00:09:33.224 "seek_hole": true, 00:09:33.224 "seek_data": true, 00:09:33.224 "copy": false, 00:09:33.224 "nvme_iov_md": false 00:09:33.224 }, 00:09:33.224 "driver_specific": { 00:09:33.224 "lvol": { 00:09:33.224 "lvol_store_uuid": "25fa8480-9f07-45dd-99cb-a119ea18acc8", 00:09:33.224 "base_bdev": "aio_bdev", 00:09:33.224 "thin_provision": false, 00:09:33.224 "num_allocated_clusters": 38, 00:09:33.224 "snapshot": false, 00:09:33.225 "clone": false, 00:09:33.225 "esnap_clone": false 00:09:33.225 } 00:09:33.225 } 00:09:33.225 } 00:09:33.225 ] 00:09:33.484 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:33.484 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:33.484 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:33.484 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:33.484 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:33.484 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:33.742 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:33.742 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:34.001 [2024-11-26 16:14:59.537148] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.001 16:14:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:34.001 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:34.260 request: 00:09:34.260 { 00:09:34.260 "uuid": "25fa8480-9f07-45dd-99cb-a119ea18acc8", 00:09:34.260 "method": "bdev_lvol_get_lvstores", 00:09:34.260 "req_id": 1 00:09:34.260 } 00:09:34.260 Got JSON-RPC error response 00:09:34.260 response: 00:09:34.260 { 00:09:34.260 "code": -19, 00:09:34.260 "message": "No such device" 00:09:34.260 } 00:09:34.260 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:34.260 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.260 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.260 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.260 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.528 aio_bdev 00:09:34.528 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b9956149-7382-4ce7-98cf-74b3efd2d084 00:09:34.528 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b9956149-7382-4ce7-98cf-74b3efd2d084 00:09:34.528 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.528 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:34.528 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.528 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.528 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:34.799 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9956149-7382-4ce7-98cf-74b3efd2d084 -t 2000 00:09:35.058 [ 00:09:35.058 { 
00:09:35.058 "name": "b9956149-7382-4ce7-98cf-74b3efd2d084", 00:09:35.058 "aliases": [ 00:09:35.058 "lvs/lvol" 00:09:35.058 ], 00:09:35.058 "product_name": "Logical Volume", 00:09:35.058 "block_size": 4096, 00:09:35.058 "num_blocks": 38912, 00:09:35.058 "uuid": "b9956149-7382-4ce7-98cf-74b3efd2d084", 00:09:35.058 "assigned_rate_limits": { 00:09:35.058 "rw_ios_per_sec": 0, 00:09:35.058 "rw_mbytes_per_sec": 0, 00:09:35.058 "r_mbytes_per_sec": 0, 00:09:35.058 "w_mbytes_per_sec": 0 00:09:35.058 }, 00:09:35.058 "claimed": false, 00:09:35.058 "zoned": false, 00:09:35.058 "supported_io_types": { 00:09:35.058 "read": true, 00:09:35.058 "write": true, 00:09:35.058 "unmap": true, 00:09:35.058 "flush": false, 00:09:35.058 "reset": true, 00:09:35.058 "nvme_admin": false, 00:09:35.058 "nvme_io": false, 00:09:35.058 "nvme_io_md": false, 00:09:35.058 "write_zeroes": true, 00:09:35.058 "zcopy": false, 00:09:35.058 "get_zone_info": false, 00:09:35.058 "zone_management": false, 00:09:35.058 "zone_append": false, 00:09:35.058 "compare": false, 00:09:35.058 "compare_and_write": false, 00:09:35.058 "abort": false, 00:09:35.058 "seek_hole": true, 00:09:35.058 "seek_data": true, 00:09:35.058 "copy": false, 00:09:35.058 "nvme_iov_md": false 00:09:35.058 }, 00:09:35.058 "driver_specific": { 00:09:35.058 "lvol": { 00:09:35.058 "lvol_store_uuid": "25fa8480-9f07-45dd-99cb-a119ea18acc8", 00:09:35.058 "base_bdev": "aio_bdev", 00:09:35.058 "thin_provision": false, 00:09:35.058 "num_allocated_clusters": 38, 00:09:35.058 "snapshot": false, 00:09:35.058 "clone": false, 00:09:35.058 "esnap_clone": false 00:09:35.058 } 00:09:35.058 } 00:09:35.058 } 00:09:35.058 ] 00:09:35.058 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:35.058 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:35.058 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:35.317 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:35.317 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:35.317 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:35.576 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:35.576 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b9956149-7382-4ce7-98cf-74b3efd2d084 00:09:35.835 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 25fa8480-9f07-45dd-99cb-a119ea18acc8 00:09:36.094 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.353 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:36.921 00:09:36.921 real 0m19.395s 00:09:36.921 user 0m39.775s 00:09:36.921 sys 0m9.328s 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.921 ************************************ 00:09:36.921 END TEST lvs_grow_dirty 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:36.921 ************************************ 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:36.921 nvmf_trace.0 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:36.921 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:36.922 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.922 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:37.180 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.180 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:37.180 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.180 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.180 rmmod nvme_tcp 00:09:37.180 rmmod nvme_fabrics 00:09:37.440 rmmod nvme_keyring 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 75595 ']' 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 75595 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 75595 ']' 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 75595 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:37.440 16:15:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75595 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.440 killing process with pid 75595 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75595' 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 75595 00:09:37.440 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 75595 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:37.440 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:37.700 00:09:37.700 real 0m40.092s 00:09:37.700 user 1m2.843s 00:09:37.700 sys 0m12.842s 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:37.700 ************************************ 00:09:37.700 END TEST nvmf_lvs_grow 00:09:37.700 ************************************ 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.700 ************************************ 00:09:37.700 START TEST nvmf_bdev_io_wait 00:09:37.700 ************************************ 00:09:37.700 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:37.960 * Looking for test storage... 
00:09:37.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:37.960 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.961 --rc genhtml_branch_coverage=1 00:09:37.961 --rc genhtml_function_coverage=1 00:09:37.961 --rc genhtml_legend=1 00:09:37.961 --rc geninfo_all_blocks=1 00:09:37.961 --rc geninfo_unexecuted_blocks=1 00:09:37.961 00:09:37.961 ' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.961 --rc genhtml_branch_coverage=1 00:09:37.961 --rc genhtml_function_coverage=1 00:09:37.961 --rc genhtml_legend=1 00:09:37.961 --rc geninfo_all_blocks=1 00:09:37.961 --rc geninfo_unexecuted_blocks=1 00:09:37.961 00:09:37.961 ' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.961 --rc genhtml_branch_coverage=1 00:09:37.961 --rc genhtml_function_coverage=1 00:09:37.961 --rc genhtml_legend=1 00:09:37.961 --rc geninfo_all_blocks=1 00:09:37.961 --rc geninfo_unexecuted_blocks=1 00:09:37.961 00:09:37.961 ' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.961 --rc genhtml_branch_coverage=1 00:09:37.961 --rc genhtml_function_coverage=1 00:09:37.961 --rc genhtml_legend=1 00:09:37.961 --rc geninfo_all_blocks=1 00:09:37.961 --rc geninfo_unexecuted_blocks=1 00:09:37.961 00:09:37.961 ' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.961 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:37.961 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:37.962 
16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:37.962 Cannot find device "nvmf_init_br" 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:37.962 Cannot find device "nvmf_init_br2" 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:37.962 Cannot find device "nvmf_tgt_br" 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:37.962 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.221 Cannot find device "nvmf_tgt_br2" 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:38.221 Cannot find device "nvmf_init_br" 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:38.221 Cannot find device "nvmf_init_br2" 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:38.221 Cannot find device "nvmf_tgt_br" 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:38.221 Cannot find device "nvmf_tgt_br2" 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:38.221 Cannot find device "nvmf_br" 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:38.221 Cannot find device "nvmf_init_if" 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:38.221 Cannot find device "nvmf_init_if2" 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:38.221 
16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.221 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.479 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:38.479 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:38.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:09:38.480 00:09:38.480 --- 10.0.0.3 ping statistics --- 00:09:38.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.480 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:38.480 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:38.480 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:09:38.480 00:09:38.480 --- 10.0.0.4 ping statistics --- 00:09:38.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.480 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:38.480 00:09:38.480 --- 10.0.0.1 ping statistics --- 00:09:38.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.480 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:38.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:38.480 00:09:38.480 --- 10.0.0.2 ping statistics --- 00:09:38.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.480 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=75958 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 75958 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 75958 ']' 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.480 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.480 [2024-11-26 16:15:04.055145] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:09:38.480 [2024-11-26 16:15:04.055261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.739 [2024-11-26 16:15:04.211567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.739 [2024-11-26 16:15:04.238095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.739 [2024-11-26 16:15:04.238422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.739 [2024-11-26 16:15:04.238614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.739 [2024-11-26 16:15:04.238736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.739 [2024-11-26 16:15:04.238857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.739 [2024-11-26 16:15:04.239803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.739 [2024-11-26 16:15:04.239937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.739 [2024-11-26 16:15:04.240637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.739 [2024-11-26 16:15:04.240651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.739 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.998 [2024-11-26 16:15:04.412404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.998 [2024-11-26 16:15:04.423858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.998 Malloc0 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.998 [2024-11-26 16:15:04.473667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=75985 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=75987 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:38.998 16:15:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:38.998 { 00:09:38.998 "params": { 00:09:38.998 "name": "Nvme$subsystem", 00:09:38.998 "trtype": "$TEST_TRANSPORT", 00:09:38.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:38.998 "adrfam": "ipv4", 00:09:38.998 "trsvcid": "$NVMF_PORT", 00:09:38.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:38.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:38.998 "hdgst": ${hdgst:-false}, 00:09:38.998 "ddgst": ${ddgst:-false} 00:09:38.998 }, 00:09:38.998 "method": "bdev_nvme_attach_controller" 00:09:38.998 } 00:09:38.998 EOF 00:09:38.998 )") 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=75990 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:38.998 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:38.998 { 00:09:38.998 "params": { 00:09:38.998 "name": "Nvme$subsystem", 00:09:38.998 "trtype": "$TEST_TRANSPORT", 00:09:38.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:38.998 "adrfam": "ipv4", 00:09:38.998 "trsvcid": "$NVMF_PORT", 00:09:38.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:38.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:38.999 "hdgst": ${hdgst:-false}, 00:09:38.999 "ddgst": ${ddgst:-false} 00:09:38.999 }, 00:09:38.999 "method": "bdev_nvme_attach_controller" 00:09:38.999 } 00:09:38.999 EOF 00:09:38.999 )") 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=75992 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:38.999 { 00:09:38.999 "params": { 00:09:38.999 "name": "Nvme$subsystem", 00:09:38.999 "trtype": "$TEST_TRANSPORT", 00:09:38.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:38.999 "adrfam": "ipv4", 00:09:38.999 "trsvcid": 
"$NVMF_PORT", 00:09:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:38.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:38.999 "hdgst": ${hdgst:-false}, 00:09:38.999 "ddgst": ${ddgst:-false} 00:09:38.999 }, 00:09:38.999 "method": "bdev_nvme_attach_controller" 00:09:38.999 } 00:09:38.999 EOF 00:09:38.999 )") 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:38.999 "params": { 00:09:38.999 "name": "Nvme1", 00:09:38.999 "trtype": "tcp", 00:09:38.999 "traddr": "10.0.0.3", 00:09:38.999 "adrfam": "ipv4", 00:09:38.999 "trsvcid": "4420", 00:09:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:38.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:38.999 "hdgst": false, 00:09:38.999 "ddgst": false 00:09:38.999 }, 00:09:38.999 "method": "bdev_nvme_attach_controller" 00:09:38.999 }' 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:38.999 { 00:09:38.999 "params": { 00:09:38.999 "name": "Nvme$subsystem", 00:09:38.999 "trtype": "$TEST_TRANSPORT", 00:09:38.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:38.999 "adrfam": "ipv4", 00:09:38.999 "trsvcid": "$NVMF_PORT", 00:09:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:38.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:38.999 "hdgst": ${hdgst:-false}, 00:09:38.999 "ddgst": ${ddgst:-false} 00:09:38.999 }, 00:09:38.999 "method": "bdev_nvme_attach_controller" 00:09:38.999 } 00:09:38.999 EOF 00:09:38.999 )") 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:38.999 "params": { 00:09:38.999 "name": "Nvme1", 00:09:38.999 "trtype": "tcp", 00:09:38.999 "traddr": "10.0.0.3", 00:09:38.999 "adrfam": "ipv4", 00:09:38.999 "trsvcid": "4420", 00:09:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:38.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:38.999 "hdgst": false, 00:09:38.999 "ddgst": false 00:09:38.999 }, 00:09:38.999 "method": "bdev_nvme_attach_controller" 00:09:38.999 }' 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:38.999 "params": { 00:09:38.999 "name": "Nvme1", 00:09:38.999 "trtype": "tcp", 00:09:38.999 "traddr": "10.0.0.3", 00:09:38.999 "adrfam": "ipv4", 00:09:38.999 "trsvcid": "4420", 00:09:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:38.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:38.999 "hdgst": false, 00:09:38.999 "ddgst": false 00:09:38.999 }, 00:09:38.999 "method": "bdev_nvme_attach_controller" 00:09:38.999 }' 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:38.999 "params": { 00:09:38.999 "name": "Nvme1", 00:09:38.999 "trtype": "tcp", 00:09:38.999 "traddr": "10.0.0.3", 00:09:38.999 "adrfam": "ipv4", 00:09:38.999 "trsvcid": "4420", 00:09:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:38.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:38.999 "hdgst": false, 00:09:38.999 "ddgst": false 00:09:38.999 }, 00:09:38.999 "method": "bdev_nvme_attach_controller" 00:09:38.999 }' 00:09:38.999 [2024-11-26 16:15:04.536725] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:09:38.999 [2024-11-26 16:15:04.536812] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:38.999 [2024-11-26 16:15:04.542931] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:09:38.999 [2024-11-26 16:15:04.543015] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:38.999 [2024-11-26 16:15:04.569298] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:09:38.999 [2024-11-26 16:15:04.569416] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:38.999 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 75985 00:09:38.999 [2024-11-26 16:15:04.607107] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:09:38.999 [2024-11-26 16:15:04.607207] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:39.258 [2024-11-26 16:15:04.728405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.258 [2024-11-26 16:15:04.745064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:39.258 [2024-11-26 16:15:04.758966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.258 [2024-11-26 16:15:04.768191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.258 [2024-11-26 16:15:04.784839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:39.258 [2024-11-26 16:15:04.798596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.258 [2024-11-26 16:15:04.810827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.258 [2024-11-26 16:15:04.826560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:39.258 [2024-11-26 16:15:04.840396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.258 [2024-11-26 16:15:04.860868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.258 Running I/O for 1 seconds... 00:09:39.258 [2024-11-26 16:15:04.876754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:39.258 [2024-11-26 16:15:04.890692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.258 Running I/O for 1 seconds... 00:09:39.517 Running I/O for 1 seconds... 00:09:39.517 Running I/O for 1 seconds... 
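At this point four bdevperf instances have been launched in parallel (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each for a one-second run; their per-workload results follow below. For readers of the trace, the read-workload command is restated here with its flags glossed. The glosses are the commonly documented meanings of SPDK's bdevperf and app options rather than anything printed in this log, so treat them as an assumption-laden reader's aid; the path and flag values themselves are copied from the trace above.

# Read-workload bdevperf invocation from the trace, restated with glossed flags.
# Flag descriptions are assumptions based on SPDK's usual option meanings.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

args=(
  -m 0x20            # core mask (0x20 selects core 5)
  -i 2               # shared-memory instance ID, keeps the four concurrent runs apart
  --json /dev/fd/63  # bdev config produced by gen_nvmf_target_json
  -q 128             # queue depth
  -o 4096            # I/O size in bytes
  -w read            # workload type (the sibling instances use write, flush, unmap)
  -t 1               # run time in seconds ("Running I/O for 1 seconds...")
  -s 256             # DPDK memory size in MB
)

"$bdevperf" "${args[@]}"
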
00:09:40.452 156928.00 IOPS, 613.00 MiB/s 00:09:40.452 Latency(us) 00:09:40.452 [2024-11-26T16:15:06.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.452 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:40.452 Nvme1n1 : 1.00 156604.94 611.74 0.00 0.00 812.93 398.43 2040.55 00:09:40.452 [2024-11-26T16:15:06.105Z] =================================================================================================================== 00:09:40.452 [2024-11-26T16:15:06.105Z] Total : 156604.94 611.74 0.00 0.00 812.93 398.43 2040.55 00:09:40.452 9976.00 IOPS, 38.97 MiB/s 00:09:40.452 Latency(us) 00:09:40.452 [2024-11-26T16:15:06.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.452 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:40.452 Nvme1n1 : 1.01 10027.39 39.17 0.00 0.00 12709.28 6523.81 20256.58 00:09:40.452 [2024-11-26T16:15:06.105Z] =================================================================================================================== 00:09:40.452 [2024-11-26T16:15:06.105Z] Total : 10027.39 39.17 0.00 0.00 12709.28 6523.81 20256.58 00:09:40.452 7344.00 IOPS, 28.69 MiB/s 00:09:40.452 Latency(us) 00:09:40.452 [2024-11-26T16:15:06.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.452 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:40.452 Nvme1n1 : 1.01 7391.53 28.87 0.00 0.00 17214.14 9532.51 25976.09 00:09:40.452 [2024-11-26T16:15:06.105Z] =================================================================================================================== 00:09:40.452 [2024-11-26T16:15:06.105Z] Total : 7391.53 28.87 0.00 0.00 17214.14 9532.51 25976.09 00:09:40.452 8167.00 IOPS, 31.90 MiB/s [2024-11-26T16:15:06.105Z] 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 75987 00:09:40.452 00:09:40.452 Latency(us) 00:09:40.452 [2024-11-26T16:15:06.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.452 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:40.452 Nvme1n1 : 1.01 8248.44 32.22 0.00 0.00 15451.23 6702.55 25976.09 00:09:40.452 [2024-11-26T16:15:06.105Z] =================================================================================================================== 00:09:40.452 [2024-11-26T16:15:06.105Z] Total : 8248.44 32.22 0.00 0.00 15451.23 6702.55 25976.09 00:09:40.452 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 75990 00:09:40.452 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 75992 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.712 rmmod nvme_tcp 00:09:40.712 rmmod nvme_fabrics 00:09:40.712 rmmod nvme_keyring 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 75958 ']' 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 75958 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 75958 ']' 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 75958 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75958 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.712 killing process with pid 75958 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75958' 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 75958 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 75958 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.712 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:40.971 16:15:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:40.971 ************************************ 00:09:40.971 END TEST nvmf_bdev_io_wait 00:09:40.971 ************************************ 00:09:40.971 00:09:40.971 real 0m3.279s 00:09:40.971 user 0m12.591s 00:09:40.971 sys 0m2.093s 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.971 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.230 ************************************ 00:09:41.230 START TEST nvmf_queue_depth 00:09:41.230 ************************************ 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:41.230 * Looking for test 
storage... 00:09:41.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.230 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.231 --rc genhtml_branch_coverage=1 00:09:41.231 --rc genhtml_function_coverage=1 00:09:41.231 --rc genhtml_legend=1 00:09:41.231 --rc geninfo_all_blocks=1 00:09:41.231 --rc geninfo_unexecuted_blocks=1 00:09:41.231 00:09:41.231 ' 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.231 --rc genhtml_branch_coverage=1 00:09:41.231 --rc genhtml_function_coverage=1 00:09:41.231 --rc genhtml_legend=1 00:09:41.231 --rc geninfo_all_blocks=1 00:09:41.231 --rc geninfo_unexecuted_blocks=1 00:09:41.231 00:09:41.231 ' 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.231 --rc genhtml_branch_coverage=1 00:09:41.231 --rc genhtml_function_coverage=1 00:09:41.231 --rc genhtml_legend=1 00:09:41.231 --rc geninfo_all_blocks=1 00:09:41.231 --rc geninfo_unexecuted_blocks=1 00:09:41.231 00:09:41.231 ' 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.231 --rc genhtml_branch_coverage=1 00:09:41.231 --rc genhtml_function_coverage=1 00:09:41.231 --rc genhtml_legend=1 00:09:41.231 --rc geninfo_all_blocks=1 00:09:41.231 --rc geninfo_unexecuted_blocks=1 00:09:41.231 00:09:41.231 ' 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:41.231 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.491 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:41.491 
16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.491 16:15:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:41.491 Cannot find device "nvmf_init_br" 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:41.491 Cannot find device "nvmf_init_br2" 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:41.491 Cannot find device "nvmf_tgt_br" 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.491 Cannot find device "nvmf_tgt_br2" 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:41.491 Cannot find device "nvmf_init_br" 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:41.491 Cannot find device "nvmf_init_br2" 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:41.491 Cannot find device "nvmf_tgt_br" 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:41.491 Cannot find device "nvmf_tgt_br2" 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:41.491 Cannot find device "nvmf_br" 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:41.491 16:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:41.491 Cannot find device "nvmf_init_if" 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:41.491 Cannot find device "nvmf_init_if2" 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.491 16:15:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.491 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.751 
16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:41.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:09:41.751 00:09:41.751 --- 10.0.0.3 ping statistics --- 00:09:41.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.751 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:41.751 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:41.751 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:41.751 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:09:41.751 00:09:41.751 --- 10.0.0.4 ping statistics --- 00:09:41.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.751 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:41.752 00:09:41.752 --- 10.0.0.1 ping statistics --- 00:09:41.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.752 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:41.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:09:41.752 00:09:41.752 --- 10.0.0.2 ping statistics --- 00:09:41.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.752 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=76245 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 76245 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 76245 ']' 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.752 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:41.752 [2024-11-26 16:15:07.380279] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:09:41.752 [2024-11-26 16:15:07.380446] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.011 [2024-11-26 16:15:07.526780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.011 [2024-11-26 16:15:07.550003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.011 [2024-11-26 16:15:07.550076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.011 [2024-11-26 16:15:07.550090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.011 [2024-11-26 16:15:07.550100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.011 [2024-11-26 16:15:07.550109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.011 [2024-11-26 16:15:07.550517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.011 [2024-11-26 16:15:07.583184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.011 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.011 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:42.011 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.011 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.011 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.271 [2024-11-26 16:15:07.693029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.271 Malloc0 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.271 [2024-11-26 16:15:07.735856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=76275 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 76275 /var/tmp/bdevperf.sock 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 76275 ']' 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:42.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.271 16:15:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.272 [2024-11-26 16:15:07.796404] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:09:42.272 [2024-11-26 16:15:07.796502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76275 ] 00:09:42.531 [2024-11-26 16:15:07.951143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.531 [2024-11-26 16:15:07.976617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.531 [2024-11-26 16:15:08.011021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.531 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.531 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:42.531 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:42.531 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.531 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.531 NVMe0n1 00:09:42.531 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.531 16:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:42.791 Running I/O for 10 seconds... 00:09:44.665 7168.00 IOPS, 28.00 MiB/s [2024-11-26T16:15:11.697Z] 7694.00 IOPS, 30.05 MiB/s [2024-11-26T16:15:12.280Z] 7927.00 IOPS, 30.96 MiB/s [2024-11-26T16:15:13.660Z] 8078.25 IOPS, 31.56 MiB/s [2024-11-26T16:15:14.598Z] 8214.00 IOPS, 32.09 MiB/s [2024-11-26T16:15:15.536Z] 8324.83 IOPS, 32.52 MiB/s [2024-11-26T16:15:16.474Z] 8370.57 IOPS, 32.70 MiB/s [2024-11-26T16:15:17.412Z] 8465.75 IOPS, 33.07 MiB/s [2024-11-26T16:15:18.364Z] 8547.44 IOPS, 33.39 MiB/s [2024-11-26T16:15:18.364Z] 8617.20 IOPS, 33.66 MiB/s 00:09:52.711 Latency(us) 00:09:52.711 [2024-11-26T16:15:18.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.711 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:52.711 Verification LBA range: start 0x0 length 0x4000 00:09:52.711 NVMe0n1 : 10.09 8641.94 33.76 0.00 0.00 117978.25 24069.59 87222.46 00:09:52.711 [2024-11-26T16:15:18.364Z] =================================================================================================================== 00:09:52.711 [2024-11-26T16:15:18.364Z] Total : 8641.94 33.76 0.00 0.00 117978.25 24069.59 87222.46 00:09:52.711 { 00:09:52.711 "results": [ 00:09:52.711 { 00:09:52.711 "job": "NVMe0n1", 00:09:52.711 "core_mask": "0x1", 00:09:52.711 "workload": "verify", 00:09:52.711 "status": "finished", 00:09:52.711 "verify_range": { 00:09:52.711 "start": 0, 00:09:52.711 "length": 16384 00:09:52.711 }, 00:09:52.711 "queue_depth": 1024, 00:09:52.711 "io_size": 4096, 00:09:52.711 "runtime": 10.089859, 00:09:52.711 "iops": 8641.94435224516, 00:09:52.711 "mibps": 33.75759512595766, 00:09:52.711 "io_failed": 0, 00:09:52.711 "io_timeout": 0, 00:09:52.711 "avg_latency_us": 117978.25490110055, 00:09:52.711 "min_latency_us": 24069.585454545453, 00:09:52.711 "max_latency_us": 87222.45818181818 00:09:52.711 
} 00:09:52.711 ], 00:09:52.711 "core_count": 1 00:09:52.711 } 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 76275 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 76275 ']' 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 76275 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76275 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.970 killing process with pid 76275 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76275' 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 76275 00:09:52.970 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.970 00:09:52.970 Latency(us) 00:09:52.970 [2024-11-26T16:15:18.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.970 [2024-11-26T16:15:18.623Z] =================================================================================================================== 00:09:52.970 [2024-11-26T16:15:18.623Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 76275 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.970 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.970 rmmod nvme_tcp 00:09:52.970 rmmod nvme_fabrics 00:09:53.230 rmmod nvme_keyring 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 76245 ']' 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 76245 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 76245 ']' 00:09:53.230 
16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 76245 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76245 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:53.230 killing process with pid 76245 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76245' 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 76245 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 76245 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:53.230 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:53.489 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.489 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:53.489 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:53.489 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:53.489 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:53.489 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:53.489 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:53.489 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:53.489 16:15:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:53.489 00:09:53.489 real 0m12.415s 00:09:53.489 user 0m21.213s 00:09:53.489 sys 0m2.075s 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.489 ************************************ 00:09:53.489 END TEST nvmf_queue_depth 00:09:53.489 ************************************ 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.489 ************************************ 00:09:53.489 START TEST nvmf_target_multipath 00:09:53.489 ************************************ 00:09:53.489 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:53.768 * Looking for test storage... 
00:09:53.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:53.769 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.770 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.770 --rc genhtml_branch_coverage=1 00:09:53.770 --rc genhtml_function_coverage=1 00:09:53.770 --rc genhtml_legend=1 00:09:53.770 --rc geninfo_all_blocks=1 00:09:53.770 --rc geninfo_unexecuted_blocks=1 00:09:53.770 00:09:53.771 ' 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.771 --rc genhtml_branch_coverage=1 00:09:53.771 --rc genhtml_function_coverage=1 00:09:53.771 --rc genhtml_legend=1 00:09:53.771 --rc geninfo_all_blocks=1 00:09:53.771 --rc geninfo_unexecuted_blocks=1 00:09:53.771 00:09:53.771 ' 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.771 --rc genhtml_branch_coverage=1 00:09:53.771 --rc genhtml_function_coverage=1 00:09:53.771 --rc genhtml_legend=1 00:09:53.771 --rc geninfo_all_blocks=1 00:09:53.771 --rc geninfo_unexecuted_blocks=1 00:09:53.771 00:09:53.771 ' 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.771 --rc genhtml_branch_coverage=1 00:09:53.771 --rc genhtml_function_coverage=1 00:09:53.771 --rc genhtml_legend=1 00:09:53.771 --rc geninfo_all_blocks=1 00:09:53.771 --rc geninfo_unexecuted_blocks=1 00:09:53.771 00:09:53.771 ' 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.771 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.772 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.773 
16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.773 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:53.773 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:53.774 16:15:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.774 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:53.774 Cannot find device "nvmf_init_br" 00:09:53.775 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:53.775 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:53.775 Cannot find device "nvmf_init_br2" 00:09:53.775 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:53.775 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:53.775 Cannot find device "nvmf_tgt_br" 00:09:53.775 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:53.775 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.775 Cannot find device "nvmf_tgt_br2" 00:09:53.775 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:53.775 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:54.046 Cannot find device "nvmf_init_br" 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:54.046 Cannot find device "nvmf_init_br2" 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:54.046 Cannot find device "nvmf_tgt_br" 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:54.046 Cannot find device "nvmf_tgt_br2" 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:54.046 Cannot find device "nvmf_br" 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:54.046 Cannot find device "nvmf_init_if" 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:54.046 Cannot find device "nvmf_init_if2" 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
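The commands above build the virtual test network one veth pair at a time: the initiator-side interfaces stay in the default namespace, while the target-side interfaces are moved into nvmf_tgt_ns_spdk before being addressed. Condensed into a standalone sketch of that pattern (illustrative only, not a verbatim excerpt of nvmf_veth_init in test/nvmf/common.sh; only one of the two initiator/target pairs is shown):

# Sketch of the veth/namespace pattern used by nvmf_veth_init (illustrative, one pair only).
ip netns add nvmf_tgt_ns_spdk

# initiator side: stays in the default namespace, its *_br peer is kept for bridging
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip link set nvmf_init_if up

# target side: pushed into the namespace before it gets its address
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# the *_br peers are enslaved to the nvmf_br bridge in the next step so the two
# namespaces can reach each other (see the bridge and iptables commands that follow)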
00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:54.046 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:54.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:54.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:09:54.305 00:09:54.305 --- 10.0.0.3 ping statistics --- 00:09:54.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.305 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:54.305 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:54.305 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.103 ms 00:09:54.305 00:09:54.305 --- 10.0.0.4 ping statistics --- 00:09:54.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.305 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:54.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:54.305 00:09:54.305 --- 10.0.0.1 ping statistics --- 00:09:54.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.305 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:54.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:09:54.305 00:09:54.305 --- 10.0.0.2 ping statistics --- 00:09:54.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.305 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=76635 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 76635 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 76635 ']' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
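With the addressing in place, the bridge and firewall commands above stitch the two namespaces together, and the four pings confirm that each side can reach both of the other side's addresses before the target application is started. The same step, condensed into a sketch (illustrative; the real commands go through the ipts wrapper, which additionally tags each rule with an SPDK_NVMF comment so the teardown path can strip them via iptables-save / grep -v SPDK_NVMF / iptables-restore):

# Sketch of the bridging/firewall step shown above (illustrative, comment tagging omitted).
ip link add nvmf_br type bridge
ip link set nvmf_br up

# enslave the host-side veth peers so initiator and target namespaces share one L2 segment
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" master nvmf_br
done

# accept NVMe/TCP (port 4420) from the initiator interfaces and let the bridge forward
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity check in both directions, matching the pings above
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1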
00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.305 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:54.305 [2024-11-26 16:15:19.859063] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:09:54.305 [2024-11-26 16:15:19.859167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.566 [2024-11-26 16:15:20.013542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.566 [2024-11-26 16:15:20.039181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.566 [2024-11-26 16:15:20.039241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.566 [2024-11-26 16:15:20.039255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.566 [2024-11-26 16:15:20.039265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.566 [2024-11-26 16:15:20.039273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.566 [2024-11-26 16:15:20.040179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.566 [2024-11-26 16:15:20.040321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.566 [2024-11-26 16:15:20.040448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.566 [2024-11-26 16:15:20.040449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.566 [2024-11-26 16:15:20.076432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.566 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.566 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:54.566 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.566 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.566 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:54.566 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.566 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:54.824 [2024-11-26 16:15:20.453980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.081 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:55.340 Malloc0 00:09:55.340 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:55.598 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.856 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:56.114 [2024-11-26 16:15:21.588822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:56.114 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:56.372 [2024-11-26 16:15:21.837049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:56.372 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:56.372 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:56.631 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:56.631 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:56.631 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:56.631 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:56.631 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=76723 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:58.533 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:58.533 [global] 00:09:58.533 thread=1 00:09:58.533 invalidate=1 00:09:58.533 rw=randrw 00:09:58.533 time_based=1 00:09:58.533 runtime=6 00:09:58.533 ioengine=libaio 00:09:58.533 direct=1 00:09:58.533 bs=4096 00:09:58.533 iodepth=128 00:09:58.533 norandommap=0 00:09:58.533 numjobs=1 00:09:58.533 00:09:58.533 verify_dump=1 00:09:58.533 verify_backlog=512 00:09:58.533 verify_state_save=0 00:09:58.533 do_verify=1 00:09:58.533 verify=crc32c-intel 00:09:58.791 [job0] 00:09:58.791 filename=/dev/nvme0n1 00:09:58.791 Could not set queue depth (nvme0n1) 00:09:58.791 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.791 fio-3.35 00:09:58.791 Starting 1 thread 00:09:59.768 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:00.027 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:00.285 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:00.543 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:00.802 16:15:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 76723 00:10:04.988 00:10:04.988 job0: (groupid=0, jobs=1): err= 0: pid=76744: Tue Nov 26 16:15:30 2024 00:10:04.988 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(242MiB/6003msec) 00:10:04.988 slat (usec): min=4, max=6136, avg=56.74, stdev=229.48 00:10:04.988 clat (usec): min=1088, max=15897, avg=8403.90, stdev=1486.71 00:10:04.988 lat (usec): min=1107, max=15909, avg=8460.63, stdev=1491.10 00:10:04.988 clat percentiles (usec): 00:10:04.988 | 1.00th=[ 4359], 5.00th=[ 6259], 10.00th=[ 7111], 20.00th=[ 7635], 00:10:04.988 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8455], 00:10:04.988 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11863], 00:10:04.988 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14222], 99.95th=[15008], 00:10:04.988 | 99.99th=[15926] 00:10:04.988 bw ( KiB/s): min=14592, max=29200, per=52.86%, avg=21813.82, stdev=4507.69, samples=11 00:10:04.988 iops : min= 3648, max= 7300, avg=5453.45, stdev=1126.92, samples=11 00:10:04.988 write: IOPS=5985, BW=23.4MiB/s (24.5MB/s)(129MiB/5519msec); 0 zone resets 00:10:04.988 slat (usec): min=14, max=3219, avg=65.75, stdev=163.40 00:10:04.988 clat (usec): min=1056, max=13902, avg=7295.46, stdev=1293.39 00:10:04.988 lat (usec): min=1077, max=13925, avg=7361.20, stdev=1297.48 00:10:04.988 clat percentiles (usec): 00:10:04.988 | 1.00th=[ 3294], 5.00th=[ 4359], 10.00th=[ 5473], 20.00th=[ 6783], 00:10:04.988 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7701], 00:10:04.988 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:10:04.988 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12780], 99.95th=[13304], 00:10:04.988 | 99.99th=[13829] 00:10:04.988 bw ( KiB/s): min=15040, max=28672, per=91.01%, avg=21792.00, stdev=4243.97, samples=11 00:10:04.988 iops : min= 3760, max= 7168, avg=5448.00, stdev=1060.99, samples=11 00:10:04.988 lat (msec) : 2=0.04%, 4=1.47%, 10=92.36%, 20=6.13% 00:10:04.988 cpu : usr=6.00%, sys=20.29%, ctx=5389, majf=0, minf=66 00:10:04.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:04.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.988 issued rwts: total=61927,33036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.988 00:10:04.988 Run status group 0 (all jobs): 00:10:04.988 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=242MiB (254MB), run=6003-6003msec 00:10:04.988 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=129MiB (135MB), run=5519-5519msec 00:10:04.988 00:10:04.988 Disk stats (read/write): 00:10:04.988 nvme0n1: ios=60950/32524, merge=0/0, ticks=492246/223486, in_queue=715732, util=98.58% 00:10:04.988 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:05.246 16:15:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=76824 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:05.506 16:15:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:05.506 [global] 00:10:05.506 thread=1 00:10:05.506 invalidate=1 00:10:05.506 rw=randrw 00:10:05.506 time_based=1 00:10:05.506 runtime=6 00:10:05.506 ioengine=libaio 00:10:05.506 direct=1 00:10:05.506 bs=4096 00:10:05.506 iodepth=128 00:10:05.506 norandommap=0 00:10:05.506 numjobs=1 00:10:05.506 00:10:05.506 verify_dump=1 00:10:05.506 verify_backlog=512 00:10:05.506 verify_state_save=0 00:10:05.506 do_verify=1 00:10:05.506 verify=crc32c-intel 00:10:05.506 [job0] 00:10:05.506 filename=/dev/nvme0n1 00:10:05.506 Could not set queue depth (nvme0n1) 00:10:05.506 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.506 fio-3.35 00:10:05.506 Starting 1 thread 00:10:06.441 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:06.700 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:06.959 
16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:06.959 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:07.218 16:15:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:07.476 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 76824 00:10:11.662 00:10:11.662 job0: (groupid=0, jobs=1): err= 0: pid=76847: Tue Nov 26 16:15:37 2024 00:10:11.662 read: IOPS=11.2k, BW=43.7MiB/s (45.8MB/s)(263MiB/6007msec) 00:10:11.662 slat (usec): min=4, max=6439, avg=42.68, stdev=193.29 00:10:11.662 clat (usec): min=278, max=16856, avg=7717.77, stdev=2048.73 00:10:11.662 lat (usec): min=300, max=16873, avg=7760.45, stdev=2063.86 00:10:11.662 clat percentiles (usec): 00:10:11.662 | 1.00th=[ 2606], 5.00th=[ 3916], 10.00th=[ 4817], 20.00th=[ 5997], 00:10:11.662 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8356], 00:10:11.662 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11338], 00:10:11.662 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14353], 99.95th=[14615], 00:10:11.662 | 99.99th=[15533] 00:10:11.662 bw ( KiB/s): min=11640, max=37504, per=54.23%, avg=24277.82, stdev=7691.94, samples=11 00:10:11.662 iops : min= 2910, max= 9376, avg=6069.45, stdev=1922.99, samples=11 00:10:11.662 write: IOPS=6676, BW=26.1MiB/s (27.3MB/s)(142MiB/5459msec); 0 zone resets 00:10:11.662 slat (usec): min=14, max=3517, avg=57.58, stdev=147.26 00:10:11.662 clat (usec): min=737, max=15107, avg=6686.27, stdev=1812.87 00:10:11.662 lat (usec): min=759, max=15161, avg=6743.85, stdev=1828.26 00:10:11.662 clat percentiles (usec): 00:10:11.662 | 1.00th=[ 2737], 5.00th=[ 3425], 10.00th=[ 3916], 20.00th=[ 4686], 00:10:11.662 | 30.00th=[ 5800], 40.00th=[ 6915], 50.00th=[ 7308], 60.00th=[ 7570], 00:10:11.662 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 8717], 00:10:11.662 | 99.00th=[11207], 99.50th=[11731], 99.90th=[13042], 99.95th=[13304], 00:10:11.662 | 99.99th=[14353] 00:10:11.662 bw ( KiB/s): min=12304, max=38160, per=90.93%, avg=24282.18, stdev=7533.56, samples=11 00:10:11.662 iops : min= 3076, max= 9540, avg=6070.55, stdev=1883.39, samples=11 00:10:11.662 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.05% 00:10:11.662 lat (msec) : 2=0.29%, 4=7.01%, 10=87.14%, 20=5.47% 00:10:11.662 cpu : usr=6.03%, sys=22.19%, ctx=5835, majf=0, minf=92 00:10:11.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:11.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.662 issued rwts: total=67223,36445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.662 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:11.662 00:10:11.662 Run status group 0 (all jobs): 00:10:11.662 READ: bw=43.7MiB/s (45.8MB/s), 43.7MiB/s-43.7MiB/s (45.8MB/s-45.8MB/s), io=263MiB (275MB), run=6007-6007msec 00:10:11.662 WRITE: bw=26.1MiB/s (27.3MB/s), 26.1MiB/s-26.1MiB/s (27.3MB/s-27.3MB/s), io=142MiB (149MB), run=5459-5459msec 00:10:11.662 00:10:11.662 Disk stats (read/write): 00:10:11.662 nvme0n1: ios=66321/35807, merge=0/0, ticks=489511/223999, in_queue=713510, util=98.68% 00:10:11.662 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:11.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:11.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:11.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:11.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:11.921 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.179 rmmod nvme_tcp 00:10:12.179 rmmod nvme_fabrics 00:10:12.179 rmmod nvme_keyring 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
76635 ']' 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 76635 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 76635 ']' 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 76635 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.179 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76635 00:10:12.180 killing process with pid 76635 00:10:12.180 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.180 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.180 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76635' 00:10:12.180 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 76635 00:10:12.180 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 76635 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.439 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:12.439 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:12.439 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:12.439 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:12.439 16:15:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:12.439 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:12.439 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:12.698 00:10:12.698 real 0m19.064s 00:10:12.698 user 1m10.114s 00:10:12.698 sys 0m9.784s 00:10:12.698 ************************************ 00:10:12.698 END TEST nvmf_target_multipath 00:10:12.698 ************************************ 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.698 ************************************ 00:10:12.698 START TEST nvmf_zcopy 00:10:12.698 ************************************ 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:12.698 * Looking for test storage... 
00:10:12.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.698 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.958 --rc genhtml_branch_coverage=1 00:10:12.958 --rc genhtml_function_coverage=1 00:10:12.958 --rc genhtml_legend=1 00:10:12.958 --rc geninfo_all_blocks=1 00:10:12.958 --rc geninfo_unexecuted_blocks=1 00:10:12.958 00:10:12.958 ' 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.958 --rc genhtml_branch_coverage=1 00:10:12.958 --rc genhtml_function_coverage=1 00:10:12.958 --rc genhtml_legend=1 00:10:12.958 --rc geninfo_all_blocks=1 00:10:12.958 --rc geninfo_unexecuted_blocks=1 00:10:12.958 00:10:12.958 ' 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.958 --rc genhtml_branch_coverage=1 00:10:12.958 --rc genhtml_function_coverage=1 00:10:12.958 --rc genhtml_legend=1 00:10:12.958 --rc geninfo_all_blocks=1 00:10:12.958 --rc geninfo_unexecuted_blocks=1 00:10:12.958 00:10:12.958 ' 00:10:12.958 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.958 --rc genhtml_branch_coverage=1 00:10:12.958 --rc genhtml_function_coverage=1 00:10:12.958 --rc genhtml_legend=1 00:10:12.958 --rc geninfo_all_blocks=1 00:10:12.959 --rc geninfo_unexecuted_blocks=1 00:10:12.959 00:10:12.959 ' 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.959 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:12.959 Cannot find device "nvmf_init_br" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:12.959 16:15:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:12.959 Cannot find device "nvmf_init_br2" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:12.959 Cannot find device "nvmf_tgt_br" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.959 Cannot find device "nvmf_tgt_br2" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:12.959 Cannot find device "nvmf_init_br" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:12.959 Cannot find device "nvmf_init_br2" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:12.959 Cannot find device "nvmf_tgt_br" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:12.959 Cannot find device "nvmf_tgt_br2" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:12.959 Cannot find device "nvmf_br" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:12.959 Cannot find device "nvmf_init_if" 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:12.959 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:12.960 Cannot find device "nvmf_init_if2" 00:10:12.960 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:12.960 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.960 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:12.960 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.960 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:12.960 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.960 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:13.219 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:13.220 16:15:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:13.220 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:13.220 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:10:13.220 00:10:13.220 --- 10.0.0.3 ping statistics --- 00:10:13.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.220 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:13.220 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:13.220 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:10:13.220 00:10:13.220 --- 10.0.0.4 ping statistics --- 00:10:13.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.220 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:13.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:10:13.220 00:10:13.220 --- 10.0.0.1 ping statistics --- 00:10:13.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.220 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:13.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:13.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:10:13.220 00:10:13.220 --- 10.0.0.2 ping statistics --- 00:10:13.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.220 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.220 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=77149 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 77149 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 77149 ']' 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:13.479 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.479 [2024-11-26 16:15:38.919570] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:10:13.479 [2024-11-26 16:15:38.919658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.479 [2024-11-26 16:15:39.061359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.479 [2024-11-26 16:15:39.080197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.479 [2024-11-26 16:15:39.080270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.479 [2024-11-26 16:15:39.080297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.479 [2024-11-26 16:15:39.080305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.479 [2024-11-26 16:15:39.080312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.479 [2024-11-26 16:15:39.080677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.479 [2024-11-26 16:15:39.109299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.736 [2024-11-26 16:15:39.237703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.736 [2024-11-26 16:15:39.253850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.736 malloc0 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:13.736 { 00:10:13.736 "params": { 00:10:13.736 "name": "Nvme$subsystem", 00:10:13.736 "trtype": "$TEST_TRANSPORT", 00:10:13.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:13.736 "adrfam": "ipv4", 00:10:13.736 "trsvcid": "$NVMF_PORT", 00:10:13.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:13.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:13.736 "hdgst": ${hdgst:-false}, 00:10:13.736 "ddgst": ${ddgst:-false} 00:10:13.736 }, 00:10:13.736 "method": "bdev_nvme_attach_controller" 00:10:13.736 } 00:10:13.736 EOF 00:10:13.736 )") 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:13.736 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:13.736 "params": { 00:10:13.736 "name": "Nvme1", 00:10:13.736 "trtype": "tcp", 00:10:13.736 "traddr": "10.0.0.3", 00:10:13.736 "adrfam": "ipv4", 00:10:13.736 "trsvcid": "4420", 00:10:13.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:13.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:13.736 "hdgst": false, 00:10:13.736 "ddgst": false 00:10:13.736 }, 00:10:13.736 "method": "bdev_nvme_attach_controller" 00:10:13.736 }' 00:10:13.736 [2024-11-26 16:15:39.341882] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:10:13.737 [2024-11-26 16:15:39.341998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77169 ] 00:10:13.994 [2024-11-26 16:15:39.495769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.994 [2024-11-26 16:15:39.520595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.994 [2024-11-26 16:15:39.563034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:14.252 Running I/O for 10 seconds... 00:10:16.148 5890.00 IOPS, 46.02 MiB/s [2024-11-26T16:15:42.739Z] 6083.00 IOPS, 47.52 MiB/s [2024-11-26T16:15:43.676Z] 6239.00 IOPS, 48.74 MiB/s [2024-11-26T16:15:45.055Z] 6293.00 IOPS, 49.16 MiB/s [2024-11-26T16:15:45.991Z] 6332.80 IOPS, 49.48 MiB/s [2024-11-26T16:15:46.927Z] 6338.00 IOPS, 49.52 MiB/s [2024-11-26T16:15:47.863Z] 6362.29 IOPS, 49.71 MiB/s [2024-11-26T16:15:48.799Z] 6376.62 IOPS, 49.82 MiB/s [2024-11-26T16:15:49.733Z] 6399.78 IOPS, 50.00 MiB/s [2024-11-26T16:15:49.733Z] 6410.10 IOPS, 50.08 MiB/s 00:10:24.080 Latency(us) 00:10:24.080 [2024-11-26T16:15:49.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.080 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:24.080 Verification LBA range: start 0x0 length 0x1000 00:10:24.080 Nvme1n1 : 10.01 6413.09 50.10 0.00 0.00 19896.67 1370.30 33602.09 00:10:24.080 [2024-11-26T16:15:49.733Z] =================================================================================================================== 00:10:24.080 [2024-11-26T16:15:49.733Z] Total : 6413.09 50.10 0.00 0.00 19896.67 1370.30 33602.09 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=77292 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.339 { 00:10:24.339 "params": { 00:10:24.339 "name": "Nvme$subsystem", 00:10:24.339 "trtype": "$TEST_TRANSPORT", 00:10:24.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.339 "adrfam": "ipv4", 00:10:24.339 "trsvcid": "$NVMF_PORT", 00:10:24.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.339 "hdgst": ${hdgst:-false}, 00:10:24.339 "ddgst": ${ddgst:-false} 00:10:24.339 }, 00:10:24.339 "method": "bdev_nvme_attach_controller" 00:10:24.339 } 00:10:24.339 EOF 00:10:24.339 )") 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:24.339 [2024-11-26 16:15:49.805098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.339 [2024-11-26 16:15:49.805184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:24.339 16:15:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.339 "params": { 00:10:24.339 "name": "Nvme1", 00:10:24.339 "trtype": "tcp", 00:10:24.339 "traddr": "10.0.0.3", 00:10:24.339 "adrfam": "ipv4", 00:10:24.339 "trsvcid": "4420", 00:10:24.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:24.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:24.339 "hdgst": false, 00:10:24.339 "ddgst": false 00:10:24.339 }, 00:10:24.339 "method": "bdev_nvme_attach_controller" 00:10:24.339 }' 00:10:24.339 [2024-11-26 16:15:49.817036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.339 [2024-11-26 16:15:49.817083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.339 [2024-11-26 16:15:49.829033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.339 [2024-11-26 16:15:49.829078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.339 [2024-11-26 16:15:49.841051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.339 [2024-11-26 16:15:49.841105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.339 [2024-11-26 16:15:49.843242] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:10:24.339 [2024-11-26 16:15:49.843346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77292 ] 00:10:24.339 [2024-11-26 16:15:49.853051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.853099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.865057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.865095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.877049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.877098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.885055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.885093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.897047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.897093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.909053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.909102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.921050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.921099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.933035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.933075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.945065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.945132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.957068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.957119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.969068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.969117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.340 [2024-11-26 16:15:49.981121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.340 [2024-11-26 16:15:49.981171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:49.986297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.600 [2024-11-26 16:15:49.993087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:49.993144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.005100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.005163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.008012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.600 [2024-11-26 16:15:50.017175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.017237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.029187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.029265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.041135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.041206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.046956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.600 [2024-11-26 16:15:50.053116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.053181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.065105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.065181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.077110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.077162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.089116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.089167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.101137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.101193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.113140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.113191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.125198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.125266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.137187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.137246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 Running I/O for 5 seconds... 
00:10:24.600 [2024-11-26 16:15:50.149223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.149280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.165915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.165979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.181913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.181979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.199332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.199426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.214532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.214569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.600 [2024-11-26 16:15:50.230615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.600 [2024-11-26 16:15:50.230651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.247678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.247756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.263162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.263217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.272234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.272267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.288498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.288537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.305471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.305521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.321096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.321147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.331989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.332038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.347978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.348028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.365090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 
[2024-11-26 16:15:50.365141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.381081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.381162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.399008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.399048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.413922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.413984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.429996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.430073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.445694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.445796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.461662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.461719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.477598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.477663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.860 [2024-11-26 16:15:50.496263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.860 [2024-11-26 16:15:50.496373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.512278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.512374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.528492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.528570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.544509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.544576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.554213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.554279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.569469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.569531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.579075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.579133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.594601] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.594650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.609966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.610030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.625952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.626016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.642661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.642706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.658115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.658165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.667267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.667316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.682789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.682853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.699413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.699474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.713783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.713821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.730130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.730211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.747403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.747451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.120 [2024-11-26 16:15:50.764290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.120 [2024-11-26 16:15:50.764377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.779604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.779666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.789296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.789358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.804182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.804234] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.819401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.819448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.828898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.828948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.844262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.844311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.859996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.860044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.876490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.876543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.892990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.893037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.909444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.380 [2024-11-26 16:15:50.909477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.380 [2024-11-26 16:15:50.925820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.381 [2024-11-26 16:15:50.925867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.381 [2024-11-26 16:15:50.943229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.381 [2024-11-26 16:15:50.943278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.381 [2024-11-26 16:15:50.959323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.381 [2024-11-26 16:15:50.959398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.381 [2024-11-26 16:15:50.977947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.381 [2024-11-26 16:15:50.978019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.381 [2024-11-26 16:15:50.992737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.381 [2024-11-26 16:15:50.992820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.381 [2024-11-26 16:15:51.008390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.381 [2024-11-26 16:15:51.008476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.381 [2024-11-26 16:15:51.026207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.381 [2024-11-26 16:15:51.026273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.040278] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.040398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.056296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.056412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.073011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.073082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.090588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.090649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.107110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.107182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.122639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.122701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.140898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.140948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 12036.00 IOPS, 94.03 MiB/s [2024-11-26T16:15:51.293Z] [2024-11-26 16:15:51.156548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.156586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.173945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.173998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.189969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.190028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.206386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.206433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.224676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.224755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.239416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.239467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.248976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.249026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.265325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:25.640 [2024-11-26 16:15:51.265408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.640 [2024-11-26 16:15:51.280735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.640 [2024-11-26 16:15:51.280799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.291487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.291537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.305970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.306020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.316618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.316657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.331538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.331575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.347824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.347873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.365451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.365501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.379962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.380010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.396331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.396404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.411335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.411418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.427282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.427366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.444831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.444884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.460206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.460280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.469558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.469596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.484394] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.484437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.499983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.500040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.516932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.516992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.900 [2024-11-26 16:15:51.532289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.900 [2024-11-26 16:15:51.532397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.547977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.548064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.564086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.564142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.583654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.583714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.599115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.599171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.617496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.617539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.633144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.633206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.650761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.650812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.666617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.666651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.683970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.684033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.699347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.699418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.708299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.708403] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.723658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.723709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.738635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.738669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.747774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.747821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.762695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.762758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.777771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.160 [2024-11-26 16:15:51.777819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.160 [2024-11-26 16:15:51.788106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.161 [2024-11-26 16:15:51.788157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.161 [2024-11-26 16:15:51.804023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.161 [2024-11-26 16:15:51.804062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.818261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.818311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.835546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.835579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.850929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.850978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.860067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.860115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.874951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.875002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.890808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.890856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.908315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.908397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.923441] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.923475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.935315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.935379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.950733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.950783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.968592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.968629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:51.985128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:51.985176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:52.001215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:52.001265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:52.019861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:52.019910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:52.034837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:52.034887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:52.046343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:52.046399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.420 [2024-11-26 16:15:52.062862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.420 [2024-11-26 16:15:52.062914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.077634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.077667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.094260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.094310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.110478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.110512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.128284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.128368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.142027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.142076] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 12125.50 IOPS, 94.73 MiB/s [2024-11-26T16:15:52.333Z] [2024-11-26 16:15:52.156708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.156786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.173140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.173189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.188298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.188381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.203461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.203495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.221213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.221262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.236279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.236328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.245609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.245642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.261803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.261865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.272814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.272861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.289413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.289454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.305358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.305435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-11-26 16:15:52.323314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-11-26 16:15:52.323402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-11-26 16:15:52.338063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-11-26 16:15:52.338113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-11-26 16:15:52.350239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-11-26 16:15:52.350287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-11-26 
16:15:52.366636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-11-26 16:15:52.366671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-11-26 16:15:52.384266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-11-26 16:15:52.384318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-11-26 16:15:52.399213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.399263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.414991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.415042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.434446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.434495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.449247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.449297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.461230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.461294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.478048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.478097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.493934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.493983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.509909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.509957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.521008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.521058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.537351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.537410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.553148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.553196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.564438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.564473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.940 [2024-11-26 16:15:52.580369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.940 [2024-11-26 16:15:52.580419] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.596921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.596969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.615591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.615640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.630756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.630805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.641080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.641131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.657211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.657276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.673201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.673252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.690925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.690974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.705713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.705763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.722524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.722558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.736426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.736468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.751537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.751573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.763133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.763183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.779627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.779676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.795077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.795125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.805048] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.805097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.820678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.820729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.199 [2024-11-26 16:15:52.830503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.199 [2024-11-26 16:15:52.830539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.458 [2024-11-26 16:15:52.846914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.458 [2024-11-26 16:15:52.846952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.458 [2024-11-26 16:15:52.862957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.458 [2024-11-26 16:15:52.863007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.458 [2024-11-26 16:15:52.881400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.458 [2024-11-26 16:15:52.881459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.458 [2024-11-26 16:15:52.895476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.458 [2024-11-26 16:15:52.895509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.458 [2024-11-26 16:15:52.911940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.458 [2024-11-26 16:15:52.911990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.458 [2024-11-26 16:15:52.927571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.458 [2024-11-26 16:15:52.927603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.458 [2024-11-26 16:15:52.936493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.458 [2024-11-26 16:15:52.936528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.458 [2024-11-26 16:15:52.951722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.459 [2024-11-26 16:15:52.951773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.459 [2024-11-26 16:15:52.967256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.459 [2024-11-26 16:15:52.967306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.459 [2024-11-26 16:15:52.977203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.459 [2024-11-26 16:15:52.977251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.459 [2024-11-26 16:15:52.991846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.459 [2024-11-26 16:15:52.991894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.459 [2024-11-26 16:15:53.007514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.459 [2024-11-26 16:15:53.007548] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.459 [2024-11-26 16:15:53.026055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.459 [2024-11-26 16:15:53.026104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 10-20 ms intervals (over a hundred further occurrences) from 16:15:53.041 through 16:15:54.994 while the zcopy workload keeps running; throughput samples logged during this window: 12172.00 IOPS, 95.09 MiB/s [2024-11-26T16:15:53.371Z] and 12116.75 IOPS, 94.66 MiB/s [2024-11-26T16:15:54.180Z] ...]
00:10:29.570 [2024-11-26 16:15:55.008085]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.008119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 [2024-11-26 16:15:55.025031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.025211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 [2024-11-26 16:15:55.039937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.040134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 [2024-11-26 16:15:55.049528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.049560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 [2024-11-26 16:15:55.064326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.064400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 [2024-11-26 16:15:55.080052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.080086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 [2024-11-26 16:15:55.098222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.098255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 [2024-11-26 16:15:55.113678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.113711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 [2024-11-26 16:15:55.122810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.122989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 [2024-11-26 16:15:55.138165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.138368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 12109.20 IOPS, 94.60 MiB/s [2024-11-26T16:15:55.223Z] [2024-11-26 16:15:55.152424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.570 [2024-11-26 16:15:55.152460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.570 00:10:29.570 Latency(us) 00:10:29.570 [2024-11-26T16:15:55.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.570 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:29.570 Nvme1n1 : 5.01 12111.91 94.62 0.00 0.00 10554.63 4230.05 19541.64 00:10:29.570 [2024-11-26T16:15:55.223Z] =================================================================================================================== 00:10:29.570 [2024-11-26T16:15:55.223Z] Total : 12111.91 94.62 0.00 0.00 10554.63 4230.05 19541.64 00:10:29.570 [2024-11-26 16:15:55.161845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.571 [2024-11-26 16:15:55.162025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.571 [2024-11-26 
16:15:55.173850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.571 [2024-11-26 16:15:55.174066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.571 [2024-11-26 16:15:55.185881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.571 [2024-11-26 16:15:55.186169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.571 [2024-11-26 16:15:55.197879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.571 [2024-11-26 16:15:55.197922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.571 [2024-11-26 16:15:55.209879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.571 [2024-11-26 16:15:55.209921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.830 [2024-11-26 16:15:55.221884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.830 [2024-11-26 16:15:55.221926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.830 [2024-11-26 16:15:55.233876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.830 [2024-11-26 16:15:55.233916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.830 [2024-11-26 16:15:55.245903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.830 [2024-11-26 16:15:55.245947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.830 [2024-11-26 16:15:55.257879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.830 [2024-11-26 16:15:55.257916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.830 [2024-11-26 16:15:55.269877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.830 [2024-11-26 16:15:55.269903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.830 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (77292) - No such process 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 77292 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.830 delay0 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:29.830 16:15:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.830 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:29.830 [2024-11-26 16:15:55.459022] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:36.397 Initializing NVMe Controllers 00:10:36.397 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:36.397 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:36.397 Initialization complete. Launching workers. 00:10:36.397 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 207 00:10:36.397 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 494, failed to submit 33 00:10:36.397 success 360, unsuccessful 134, failed 0 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.397 rmmod nvme_tcp 00:10:36.397 rmmod nvme_fabrics 00:10:36.397 rmmod nvme_keyring 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 77149 ']' 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 77149 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 77149 ']' 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 77149 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77149 00:10:36.397 killing process with pid 77149 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77149' 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 77149 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 77149 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.397 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.397 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:36.397 00:10:36.397 real 0m23.776s 00:10:36.397 user 0m38.925s 
00:10:36.397 sys 0m6.647s 00:10:36.397 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.397 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.397 ************************************ 00:10:36.397 END TEST nvmf_zcopy 00:10:36.397 ************************************ 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.657 ************************************ 00:10:36.657 START TEST nvmf_nmic 00:10:36.657 ************************************ 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:36.657 * Looking for test storage... 00:10:36.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:36.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.657 --rc genhtml_branch_coverage=1 00:10:36.657 --rc genhtml_function_coverage=1 00:10:36.657 --rc genhtml_legend=1 00:10:36.657 --rc geninfo_all_blocks=1 00:10:36.657 --rc geninfo_unexecuted_blocks=1 00:10:36.657 00:10:36.657 ' 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:36.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.657 --rc genhtml_branch_coverage=1 00:10:36.657 --rc genhtml_function_coverage=1 00:10:36.657 --rc genhtml_legend=1 00:10:36.657 --rc geninfo_all_blocks=1 00:10:36.657 --rc geninfo_unexecuted_blocks=1 00:10:36.657 00:10:36.657 ' 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:36.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.657 --rc genhtml_branch_coverage=1 00:10:36.657 --rc genhtml_function_coverage=1 00:10:36.657 --rc genhtml_legend=1 00:10:36.657 --rc geninfo_all_blocks=1 00:10:36.657 --rc geninfo_unexecuted_blocks=1 00:10:36.657 00:10:36.657 ' 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:36.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.657 --rc genhtml_branch_coverage=1 00:10:36.657 --rc genhtml_function_coverage=1 00:10:36.657 --rc genhtml_legend=1 00:10:36.657 --rc geninfo_all_blocks=1 00:10:36.657 --rc geninfo_unexecuted_blocks=1 00:10:36.657 00:10:36.657 ' 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.657 16:16:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.657 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.658 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:36.658 16:16:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:36.658 Cannot 
find device "nvmf_init_br" 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:36.658 Cannot find device "nvmf_init_br2" 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:36.658 Cannot find device "nvmf_tgt_br" 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:36.658 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.918 Cannot find device "nvmf_tgt_br2" 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:36.918 Cannot find device "nvmf_init_br" 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:36.918 Cannot find device "nvmf_init_br2" 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:36.918 Cannot find device "nvmf_tgt_br" 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:36.918 Cannot find device "nvmf_tgt_br2" 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:36.918 Cannot find device "nvmf_br" 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:36.918 Cannot find device "nvmf_init_if" 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:36.918 Cannot find device "nvmf_init_if2" 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:36.918 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:37.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:37.178 00:10:37.178 --- 10.0.0.3 ping statistics --- 00:10:37.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.178 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:37.178 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:37.178 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:10:37.178 00:10:37.178 --- 10.0.0.4 ping statistics --- 00:10:37.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.178 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:37.178 00:10:37.178 --- 10.0.0.1 ping statistics --- 00:10:37.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.178 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:37.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:37.178 00:10:37.178 --- 10.0.0.2 ping statistics --- 00:10:37.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.178 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=77663 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 77663 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 77663 ']' 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.178 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.178 [2024-11-26 16:16:02.721079] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:10:37.178 [2024-11-26 16:16:02.721167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.437 [2024-11-26 16:16:02.875490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.437 [2024-11-26 16:16:02.903242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.437 [2024-11-26 16:16:02.903308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.437 [2024-11-26 16:16:02.903323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.437 [2024-11-26 16:16:02.903333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.437 [2024-11-26 16:16:02.903355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.437 [2024-11-26 16:16:02.904205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.437 [2024-11-26 16:16:02.904387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.437 [2024-11-26 16:16:02.904487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.437 [2024-11-26 16:16:02.904494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.437 [2024-11-26 16:16:02.943521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.437 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.437 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:37.437 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.437 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.437 16:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.437 [2024-11-26 16:16:03.040759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.437 Malloc0 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.437 16:16:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.437 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.695 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.695 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.695 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.696 [2024-11-26 16:16:03.106217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.696 test case1: single bdev can't be used in multiple subsystems 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.696 [2024-11-26 16:16:03.130018] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:37.696 [2024-11-26 16:16:03.130074] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:37.696 [2024-11-26 16:16:03.130087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.696 request: 00:10:37.696 { 00:10:37.696 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:37.696 "namespace": { 00:10:37.696 "bdev_name": "Malloc0", 00:10:37.696 "no_auto_visible": false 00:10:37.696 }, 00:10:37.696 "method": "nvmf_subsystem_add_ns", 00:10:37.696 "req_id": 1 00:10:37.696 } 00:10:37.696 Got JSON-RPC error response 00:10:37.696 response: 00:10:37.696 { 00:10:37.696 "code": -32602, 00:10:37.696 "message": "Invalid parameters" 00:10:37.696 } 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:37.696 Adding namespace failed - expected result. 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:37.696 test case2: host connect to nvmf target in multiple paths 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.696 [2024-11-26 16:16:03.146153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:37.696 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:37.954 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:37.954 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:37.954 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.954 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:37.954 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:39.855 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:39.855 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:39.855 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:39.855 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:39.855 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:39.855 16:16:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:39.855 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:39.855 [global] 00:10:39.855 thread=1 00:10:39.855 invalidate=1 00:10:39.855 rw=write 00:10:39.855 time_based=1 00:10:39.855 runtime=1 00:10:39.855 ioengine=libaio 00:10:39.855 direct=1 00:10:39.855 bs=4096 00:10:39.855 iodepth=1 00:10:39.855 norandommap=0 00:10:39.855 numjobs=1 00:10:39.855 00:10:39.855 verify_dump=1 00:10:39.855 verify_backlog=512 00:10:39.855 verify_state_save=0 00:10:39.855 do_verify=1 00:10:39.855 verify=crc32c-intel 00:10:39.855 [job0] 00:10:39.855 filename=/dev/nvme0n1 00:10:39.855 Could not set queue depth (nvme0n1) 00:10:40.114 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.114 fio-3.35 00:10:40.114 Starting 1 thread 00:10:41.491 00:10:41.491 job0: (groupid=0, jobs=1): err= 0: pid=77747: Tue Nov 26 16:16:06 2024 00:10:41.491 read: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:10:41.491 slat (nsec): min=10616, max=77694, avg=15109.11, stdev=5837.88 00:10:41.491 clat (usec): min=133, max=853, avg=179.38, stdev=30.45 00:10:41.491 lat (usec): min=144, max=868, avg=194.49, stdev=31.51 00:10:41.491 clat percentiles (usec): 00:10:41.491 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 00:10:41.491 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:10:41.491 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 212], 95.00th=[ 225], 00:10:41.491 | 99.00th=[ 255], 99.50th=[ 302], 99.90th=[ 519], 99.95th=[ 594], 00:10:41.491 | 99.99th=[ 857] 00:10:41.491 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:41.491 slat (usec): min=14, max=108, avg=22.71, stdev= 8.00 00:10:41.491 clat (usec): min=80, max=339, avg=111.53, stdev=20.72 00:10:41.491 lat (usec): min=97, max=393, avg=134.24, stdev=23.34 00:10:41.491 clat percentiles (usec): 00:10:41.491 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 96], 00:10:41.491 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 108], 60.00th=[ 113], 00:10:41.491 | 70.00th=[ 118], 80.00th=[ 125], 90.00th=[ 139], 95.00th=[ 149], 00:10:41.491 | 99.00th=[ 176], 99.50th=[ 198], 99.90th=[ 239], 99.95th=[ 334], 00:10:41.491 | 99.99th=[ 338] 00:10:41.491 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:41.491 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:41.491 lat (usec) : 100=16.82%, 250=82.53%, 500=0.60%, 750=0.03%, 1000=0.02% 00:10:41.491 cpu : usr=2.50%, sys=8.80%, ctx=6039, majf=0, minf=5 00:10:41.491 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.491 issued rwts: total=2967,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.491 00:10:41.491 Run status group 0 (all jobs): 00:10:41.491 READ: bw=11.6MiB/s (12.1MB/s), 11.6MiB/s-11.6MiB/s (12.1MB/s-12.1MB/s), io=11.6MiB (12.2MB), run=1001-1001msec 00:10:41.491 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:41.491 00:10:41.491 Disk stats (read/write): 00:10:41.491 nvme0n1: ios=2610/2937, merge=0/0, ticks=515/372, 
in_queue=887, util=91.48% 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.491 rmmod nvme_tcp 00:10:41.491 rmmod nvme_fabrics 00:10:41.491 rmmod nvme_keyring 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 77663 ']' 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 77663 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 77663 ']' 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 77663 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77663 00:10:41.491 killing process with pid 77663 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77663' 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 77663 00:10:41.491 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 77663 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:41.491 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:41.751 00:10:41.751 real 0m5.286s 00:10:41.751 user 0m15.490s 00:10:41.751 sys 0m2.324s 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.751 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.751 ************************************ 
00:10:41.751 END TEST nvmf_nmic 00:10:41.751 ************************************ 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.023 ************************************ 00:10:42.023 START TEST nvmf_fio_target 00:10:42.023 ************************************ 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:42.023 * Looking for test storage... 00:10:42.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.023 --rc genhtml_branch_coverage=1 00:10:42.023 --rc genhtml_function_coverage=1 00:10:42.023 --rc genhtml_legend=1 00:10:42.023 --rc geninfo_all_blocks=1 00:10:42.023 --rc geninfo_unexecuted_blocks=1 00:10:42.023 00:10:42.023 ' 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.023 --rc genhtml_branch_coverage=1 00:10:42.023 --rc genhtml_function_coverage=1 00:10:42.023 --rc genhtml_legend=1 00:10:42.023 --rc geninfo_all_blocks=1 00:10:42.023 --rc geninfo_unexecuted_blocks=1 00:10:42.023 00:10:42.023 ' 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.023 --rc genhtml_branch_coverage=1 00:10:42.023 --rc genhtml_function_coverage=1 00:10:42.023 --rc genhtml_legend=1 00:10:42.023 --rc geninfo_all_blocks=1 00:10:42.023 --rc geninfo_unexecuted_blocks=1 00:10:42.023 00:10:42.023 ' 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.023 --rc genhtml_branch_coverage=1 00:10:42.023 --rc genhtml_function_coverage=1 00:10:42.023 --rc genhtml_legend=1 00:10:42.023 --rc geninfo_all_blocks=1 00:10:42.023 --rc geninfo_unexecuted_blocks=1 00:10:42.023 00:10:42.023 ' 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:42.023 
16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.023 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.024 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.024 16:16:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:42.024 Cannot find device "nvmf_init_br" 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:42.024 Cannot find device "nvmf_init_br2" 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:42.024 Cannot find device "nvmf_tgt_br" 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:42.024 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.284 Cannot find device "nvmf_tgt_br2" 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:42.284 Cannot find device "nvmf_init_br" 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:42.284 Cannot find device "nvmf_init_br2" 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:42.284 Cannot find device "nvmf_tgt_br" 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:42.284 Cannot find device "nvmf_tgt_br2" 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:42.284 Cannot find device "nvmf_br" 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:42.284 Cannot find device "nvmf_init_if" 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:42.284 Cannot find device "nvmf_init_if2" 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:42.284 
16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:42.284 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:42.285 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:42.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:42.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:10:42.551 00:10:42.551 --- 10.0.0.3 ping statistics --- 00:10:42.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.551 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:42.551 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:42.551 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:10:42.551 00:10:42.551 --- 10.0.0.4 ping statistics --- 00:10:42.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.551 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:42.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:42.551 00:10:42.551 --- 10.0.0.1 ping statistics --- 00:10:42.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.551 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:42.551 16:16:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:42.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:42.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:10:42.551 00:10:42.551 --- 10.0.0.2 ping statistics --- 00:10:42.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.551 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:42.551 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.552 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=77982 00:10:42.552 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:42.552 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 77982 00:10:42.552 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 77982 ']' 00:10:42.552 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.552 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.552 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.552 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.552 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.552 [2024-11-26 16:16:08.085942] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:10:42.552 [2024-11-26 16:16:08.086036] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.810 [2024-11-26 16:16:08.227277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.810 [2024-11-26 16:16:08.248234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.810 [2024-11-26 16:16:08.248295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.810 [2024-11-26 16:16:08.248305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.810 [2024-11-26 16:16:08.248312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.810 [2024-11-26 16:16:08.248318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.810 [2024-11-26 16:16:08.249208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.811 [2024-11-26 16:16:08.249333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.811 [2024-11-26 16:16:08.249433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.811 [2024-11-26 16:16:08.249436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.811 [2024-11-26 16:16:08.279822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.811 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.811 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:42.811 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.811 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.811 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.811 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.811 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:43.069 [2024-11-26 16:16:08.662918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.069 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.328 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:43.328 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.895 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:43.895 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.152 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:44.152 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.410 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:44.411 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:44.669 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.927 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:44.927 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.204 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:45.204 16:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.462 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:45.463 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:46.030 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.030 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:46.030 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.289 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:46.289 16:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.548 16:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:46.806 [2024-11-26 16:16:12.362840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:46.806 16:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:47.064 16:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:47.322 16:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:47.579 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:47.579 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:47.579 16:16:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.579 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:47.579 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:47.579 16:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:49.482 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:49.482 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:49.482 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.482 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:49.482 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.482 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:49.482 16:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:49.482 [global] 00:10:49.482 thread=1 00:10:49.482 invalidate=1 00:10:49.482 rw=write 00:10:49.482 time_based=1 00:10:49.482 runtime=1 00:10:49.482 ioengine=libaio 00:10:49.482 direct=1 00:10:49.482 bs=4096 00:10:49.482 iodepth=1 00:10:49.482 norandommap=0 00:10:49.482 numjobs=1 00:10:49.482 00:10:49.482 verify_dump=1 00:10:49.482 verify_backlog=512 00:10:49.482 verify_state_save=0 00:10:49.482 do_verify=1 00:10:49.482 verify=crc32c-intel 00:10:49.482 [job0] 00:10:49.482 filename=/dev/nvme0n1 00:10:49.482 [job1] 00:10:49.482 filename=/dev/nvme0n2 00:10:49.482 [job2] 00:10:49.482 filename=/dev/nvme0n3 00:10:49.482 [job3] 00:10:49.482 filename=/dev/nvme0n4 00:10:49.740 Could not set queue depth (nvme0n1) 00:10:49.740 Could not set queue depth (nvme0n2) 00:10:49.740 Could not set queue depth (nvme0n3) 00:10:49.740 Could not set queue depth (nvme0n4) 00:10:49.740 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.740 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.740 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.740 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.740 fio-3.35 00:10:49.740 Starting 4 threads 00:10:51.116 00:10:51.116 job0: (groupid=0, jobs=1): err= 0: pid=78164: Tue Nov 26 16:16:16 2024 00:10:51.116 read: IOPS=1479, BW=5918KiB/s (6060kB/s)(5924KiB/1001msec) 00:10:51.116 slat (nsec): min=9760, max=54968, avg=13825.37, stdev=5038.42 00:10:51.116 clat (usec): min=179, max=641, avg=370.80, stdev=63.44 00:10:51.116 lat (usec): min=204, max=653, avg=384.63, stdev=65.34 00:10:51.116 clat percentiles (usec): 00:10:51.116 | 1.00th=[ 289], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:10:51.116 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:10:51.116 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 461], 95.00th=[ 537], 00:10:51.116 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 611], 99.95th=[ 644], 00:10:51.116 | 99.99th=[ 644] 
00:10:51.116 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:51.116 slat (nsec): min=12860, max=75375, avg=20602.32, stdev=5435.36 00:10:51.116 clat (usec): min=161, max=1808, avg=256.36, stdev=56.25 00:10:51.116 lat (usec): min=178, max=1825, avg=276.96, stdev=56.43 00:10:51.116 clat percentiles (usec): 00:10:51.116 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 198], 20.00th=[ 217], 00:10:51.116 | 30.00th=[ 237], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 273], 00:10:51.116 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:10:51.116 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 791], 99.95th=[ 1811], 00:10:51.116 | 99.99th=[ 1811] 00:10:51.116 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:10:51.116 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:51.116 lat (usec) : 250=19.52%, 500=76.17%, 750=4.24%, 1000=0.03% 00:10:51.116 lat (msec) : 2=0.03% 00:10:51.116 cpu : usr=1.60%, sys=4.10%, ctx=3018, majf=0, minf=19 00:10:51.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.116 issued rwts: total=1481,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.116 job1: (groupid=0, jobs=1): err= 0: pid=78165: Tue Nov 26 16:16:16 2024 00:10:51.116 read: IOPS=1355, BW=5423KiB/s (5553kB/s)(5428KiB/1001msec) 00:10:51.116 slat (nsec): min=13842, max=92605, avg=20851.48, stdev=5976.59 00:10:51.116 clat (usec): min=173, max=733, avg=370.89, stdev=79.78 00:10:51.116 lat (usec): min=191, max=763, avg=391.74, stdev=81.89 00:10:51.116 clat percentiles (usec): 00:10:51.116 | 1.00th=[ 285], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:10:51.116 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 00:10:51.116 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 498], 95.00th=[ 545], 00:10:51.116 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 725], 99.95th=[ 734], 00:10:51.116 | 99.99th=[ 734] 00:10:51.116 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:51.116 slat (usec): min=20, max=103, avg=34.29, stdev=10.49 00:10:51.116 clat (usec): min=93, max=677, avg=265.87, stdev=110.17 00:10:51.116 lat (usec): min=119, max=741, avg=300.16, stdev=116.60 00:10:51.116 clat percentiles (usec): 00:10:51.116 | 1.00th=[ 108], 5.00th=[ 115], 10.00th=[ 122], 20.00th=[ 141], 00:10:51.116 | 30.00th=[ 235], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 269], 00:10:51.116 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 457], 95.00th=[ 486], 00:10:51.116 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 668], 99.95th=[ 676], 00:10:51.116 | 99.99th=[ 676] 00:10:51.116 bw ( KiB/s): min= 7153, max= 7153, per=23.31%, avg=7153.00, stdev= 0.00, samples=1 00:10:51.116 iops : min= 1788, max= 1788, avg=1788.00, stdev= 0.00, samples=1 00:10:51.116 lat (usec) : 100=0.07%, 250=21.36%, 500=72.38%, 750=6.19% 00:10:51.116 cpu : usr=1.70%, sys=6.50%, ctx=2897, majf=0, minf=13 00:10:51.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.116 issued rwts: total=1357,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.116 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:51.116 job2: (groupid=0, jobs=1): err= 0: pid=78166: Tue Nov 26 16:16:16 2024 00:10:51.116 read: IOPS=2623, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1001msec) 00:10:51.116 slat (nsec): min=10935, max=57869, avg=13673.27, stdev=3658.05 00:10:51.116 clat (usec): min=140, max=1729, avg=181.00, stdev=36.09 00:10:51.116 lat (usec): min=151, max=1742, avg=194.67, stdev=36.31 00:10:51.116 clat percentiles (usec): 00:10:51.116 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:10:51.116 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:10:51.116 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 217], 00:10:51.116 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 433], 00:10:51.116 | 99.99th=[ 1729] 00:10:51.116 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:51.116 slat (usec): min=14, max=101, avg=21.36, stdev= 5.81 00:10:51.116 clat (usec): min=99, max=522, avg=134.89, stdev=17.58 00:10:51.116 lat (usec): min=117, max=540, avg=156.25, stdev=18.47 00:10:51.116 clat percentiles (usec): 00:10:51.116 | 1.00th=[ 106], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 122], 00:10:51.116 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:10:51.116 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 165], 00:10:51.116 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 204], 99.95th=[ 212], 00:10:51.116 | 99.99th=[ 523] 00:10:51.116 bw ( KiB/s): min=12263, max=12263, per=39.96%, avg=12263.00, stdev= 0.00, samples=1 00:10:51.117 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:51.117 lat (usec) : 100=0.02%, 250=99.77%, 500=0.18%, 750=0.02% 00:10:51.117 lat (msec) : 2=0.02% 00:10:51.117 cpu : usr=2.00%, sys=8.00%, ctx=5699, majf=0, minf=3 00:10:51.117 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.117 issued rwts: total=2626,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.117 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.117 job3: (groupid=0, jobs=1): err= 0: pid=78167: Tue Nov 26 16:16:16 2024 00:10:51.117 read: IOPS=1478, BW=5914KiB/s (6056kB/s)(5920KiB/1001msec) 00:10:51.117 slat (nsec): min=10101, max=65417, avg=16606.42, stdev=5572.90 00:10:51.117 clat (usec): min=239, max=627, avg=367.94, stdev=64.32 00:10:51.117 lat (usec): min=258, max=651, avg=384.55, stdev=65.22 00:10:51.117 clat percentiles (usec): 00:10:51.117 | 1.00th=[ 289], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:10:51.117 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 359], 00:10:51.117 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 469], 95.00th=[ 537], 00:10:51.117 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 627], 99.95th=[ 627], 00:10:51.117 | 99.99th=[ 627] 00:10:51.117 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:51.117 slat (usec): min=13, max=106, avg=25.51, stdev= 6.35 00:10:51.117 clat (usec): min=156, max=1894, avg=251.21, stdev=57.22 00:10:51.117 lat (usec): min=177, max=1916, avg=276.72, stdev=57.49 00:10:51.117 clat percentiles (usec): 00:10:51.117 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 210], 00:10:51.117 | 30.00th=[ 231], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 269], 00:10:51.117 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:10:51.117 | 99.00th=[ 318], 99.50th=[ 
326], 99.90th=[ 701], 99.95th=[ 1893], 00:10:51.117 | 99.99th=[ 1893] 00:10:51.117 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:10:51.117 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:51.117 lat (usec) : 250=21.22%, 500=74.34%, 750=4.41% 00:10:51.117 lat (msec) : 2=0.03% 00:10:51.117 cpu : usr=1.40%, sys=5.50%, ctx=3016, majf=0, minf=3 00:10:51.117 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.117 issued rwts: total=1480,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.117 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.117 00:10:51.117 Run status group 0 (all jobs): 00:10:51.117 READ: bw=27.1MiB/s (28.4MB/s), 5423KiB/s-10.2MiB/s (5553kB/s-10.7MB/s), io=27.1MiB (28.4MB), run=1001-1001msec 00:10:51.117 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:10:51.117 00:10:51.117 Disk stats (read/write): 00:10:51.117 nvme0n1: ios=1157/1536, merge=0/0, ticks=406/365, in_queue=771, util=88.08% 00:10:51.117 nvme0n2: ios=1073/1494, merge=0/0, ticks=440/420, in_queue=860, util=89.68% 00:10:51.117 nvme0n3: ios=2312/2560, merge=0/0, ticks=439/372, in_queue=811, util=88.96% 00:10:51.117 nvme0n4: ios=1113/1536, merge=0/0, ticks=414/397, in_queue=811, util=89.70% 00:10:51.117 16:16:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:51.117 [global] 00:10:51.117 thread=1 00:10:51.117 invalidate=1 00:10:51.117 rw=randwrite 00:10:51.117 time_based=1 00:10:51.117 runtime=1 00:10:51.117 ioengine=libaio 00:10:51.117 direct=1 00:10:51.117 bs=4096 00:10:51.117 iodepth=1 00:10:51.117 norandommap=0 00:10:51.117 numjobs=1 00:10:51.117 00:10:51.117 verify_dump=1 00:10:51.117 verify_backlog=512 00:10:51.117 verify_state_save=0 00:10:51.117 do_verify=1 00:10:51.117 verify=crc32c-intel 00:10:51.117 [job0] 00:10:51.117 filename=/dev/nvme0n1 00:10:51.117 [job1] 00:10:51.117 filename=/dev/nvme0n2 00:10:51.117 [job2] 00:10:51.117 filename=/dev/nvme0n3 00:10:51.117 [job3] 00:10:51.117 filename=/dev/nvme0n4 00:10:51.117 Could not set queue depth (nvme0n1) 00:10:51.117 Could not set queue depth (nvme0n2) 00:10:51.117 Could not set queue depth (nvme0n3) 00:10:51.117 Could not set queue depth (nvme0n4) 00:10:51.117 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.117 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.117 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.117 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.117 fio-3.35 00:10:51.117 Starting 4 threads 00:10:52.492 00:10:52.492 job0: (groupid=0, jobs=1): err= 0: pid=78220: Tue Nov 26 16:16:17 2024 00:10:52.492 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:52.492 slat (nsec): min=8463, max=59357, avg=13944.64, stdev=4615.15 00:10:52.492 clat (usec): min=186, max=626, avg=321.49, stdev=81.54 00:10:52.492 lat (usec): min=202, max=657, avg=335.44, stdev=81.94 00:10:52.492 clat percentiles (usec): 00:10:52.492 | 1.00th=[ 
212], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 247], 00:10:52.492 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 310], 60.00th=[ 334], 00:10:52.492 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 449], 95.00th=[ 478], 00:10:52.492 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 586], 99.95th=[ 627], 00:10:52.492 | 99.99th=[ 627] 00:10:52.492 write: IOPS=1989, BW=7956KiB/s (8147kB/s)(7964KiB/1001msec); 0 zone resets 00:10:52.492 slat (usec): min=10, max=499, avg=26.42, stdev=18.96 00:10:52.492 clat (usec): min=111, max=2208, avg=213.45, stdev=89.02 00:10:52.492 lat (usec): min=131, max=2229, avg=239.87, stdev=94.86 00:10:52.492 clat percentiles (usec): 00:10:52.492 | 1.00th=[ 120], 5.00th=[ 135], 10.00th=[ 145], 20.00th=[ 165], 00:10:52.492 | 30.00th=[ 182], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 215], 00:10:52.492 | 70.00th=[ 225], 80.00th=[ 243], 90.00th=[ 297], 95.00th=[ 318], 00:10:52.492 | 99.00th=[ 359], 99.50th=[ 412], 99.90th=[ 1614], 99.95th=[ 2212], 00:10:52.492 | 99.99th=[ 2212] 00:10:52.492 bw ( KiB/s): min= 8192, max= 8192, per=23.72%, avg=8192.00, stdev= 0.00, samples=1 00:10:52.492 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:52.492 lat (usec) : 250=55.88%, 500=43.07%, 750=0.88%, 1000=0.03% 00:10:52.492 lat (msec) : 2=0.11%, 4=0.03% 00:10:52.492 cpu : usr=1.70%, sys=6.00%, ctx=3538, majf=0, minf=11 00:10:52.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.492 issued rwts: total=1536,1991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.492 job1: (groupid=0, jobs=1): err= 0: pid=78221: Tue Nov 26 16:16:17 2024 00:10:52.492 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:52.492 slat (nsec): min=12245, max=85387, avg=23984.39, stdev=12979.61 00:10:52.492 clat (usec): min=183, max=1043, avg=370.88, stdev=138.25 00:10:52.492 lat (usec): min=197, max=1068, avg=394.87, stdev=148.32 00:10:52.492 clat percentiles (usec): 00:10:52.492 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:10:52.492 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 441], 00:10:52.492 | 70.00th=[ 478], 80.00th=[ 523], 90.00th=[ 570], 95.00th=[ 611], 00:10:52.492 | 99.00th=[ 644], 99.50th=[ 652], 99.90th=[ 693], 99.95th=[ 1045], 00:10:52.492 | 99.99th=[ 1045] 00:10:52.492 write: IOPS=1889, BW=7556KiB/s (7738kB/s)(7564KiB/1001msec); 0 zone resets 00:10:52.492 slat (usec): min=15, max=117, avg=24.51, stdev= 6.52 00:10:52.492 clat (usec): min=89, max=391, avg=179.10, stdev=51.65 00:10:52.492 lat (usec): min=108, max=424, avg=203.61, stdev=53.20 00:10:52.492 clat percentiles (usec): 00:10:52.492 | 1.00th=[ 100], 5.00th=[ 109], 10.00th=[ 114], 20.00th=[ 124], 00:10:52.492 | 30.00th=[ 143], 40.00th=[ 167], 50.00th=[ 180], 60.00th=[ 190], 00:10:52.492 | 70.00th=[ 202], 80.00th=[ 225], 90.00th=[ 251], 95.00th=[ 265], 00:10:52.492 | 99.00th=[ 310], 99.50th=[ 338], 99.90th=[ 379], 99.95th=[ 392], 00:10:52.492 | 99.99th=[ 392] 00:10:52.492 bw ( KiB/s): min= 8192, max= 8192, per=23.72%, avg=8192.00, stdev= 0.00, samples=1 00:10:52.492 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:52.492 lat (usec) : 100=0.58%, 250=59.47%, 500=28.92%, 750=11.00% 00:10:52.492 lat (msec) : 2=0.03% 00:10:52.492 cpu : usr=1.40%, sys=7.10%, ctx=3427, majf=0, minf=11 00:10:52.492 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.492 issued rwts: total=1536,1891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.492 job2: (groupid=0, jobs=1): err= 0: pid=78222: Tue Nov 26 16:16:17 2024 00:10:52.492 read: IOPS=1921, BW=7684KiB/s (7869kB/s)(7692KiB/1001msec) 00:10:52.492 slat (nsec): min=9907, max=60547, avg=14891.43, stdev=4903.86 00:10:52.492 clat (usec): min=147, max=2384, avg=304.00, stdev=121.77 00:10:52.492 lat (usec): min=162, max=2399, avg=318.89, stdev=122.86 00:10:52.492 clat percentiles (usec): 00:10:52.492 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 190], 00:10:52.492 | 30.00th=[ 245], 40.00th=[ 269], 50.00th=[ 302], 60.00th=[ 330], 00:10:52.492 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 433], 95.00th=[ 465], 00:10:52.492 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 1647], 99.95th=[ 2376], 00:10:52.492 | 99.99th=[ 2376] 00:10:52.492 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:52.492 slat (nsec): min=11543, max=79066, avg=20471.90, stdev=6142.07 00:10:52.492 clat (usec): min=107, max=2268, avg=165.11, stdev=72.60 00:10:52.492 lat (usec): min=128, max=2292, avg=185.58, stdev=72.29 00:10:52.492 clat percentiles (usec): 00:10:52.492 | 1.00th=[ 115], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 133], 00:10:52.492 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 153], 60.00th=[ 163], 00:10:52.492 | 70.00th=[ 176], 80.00th=[ 192], 90.00th=[ 219], 95.00th=[ 237], 00:10:52.492 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 717], 99.95th=[ 1975], 00:10:52.492 | 99.99th=[ 2278] 00:10:52.492 bw ( KiB/s): min= 8336, max= 8336, per=24.14%, avg=8336.00, stdev= 0.00, samples=1 00:10:52.492 iops : min= 2084, max= 2084, avg=2084.00, stdev= 0.00, samples=1 00:10:52.492 lat (usec) : 250=65.85%, 500=33.19%, 750=0.68%, 1000=0.05% 00:10:52.492 lat (msec) : 2=0.18%, 4=0.05% 00:10:52.492 cpu : usr=1.50%, sys=5.80%, ctx=3977, majf=0, minf=13 00:10:52.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.492 issued rwts: total=1923,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.492 job3: (groupid=0, jobs=1): err= 0: pid=78223: Tue Nov 26 16:16:17 2024 00:10:52.492 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:52.492 slat (nsec): min=8423, max=49481, avg=12919.10, stdev=3841.50 00:10:52.492 clat (usec): min=141, max=559, avg=195.55, stdev=48.83 00:10:52.492 lat (usec): min=152, max=586, avg=208.47, stdev=48.95 00:10:52.492 clat percentiles (usec): 00:10:52.492 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:10:52.492 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:10:52.492 | 70.00th=[ 196], 80.00th=[ 231], 90.00th=[ 265], 95.00th=[ 310], 00:10:52.492 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 553], 99.95th=[ 562], 00:10:52.492 | 99.99th=[ 562] 00:10:52.492 write: IOPS=2710, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:10:52.492 slat (nsec): min=14700, max=97556, avg=21140.38, stdev=6184.83 00:10:52.492 clat (usec): min=99, max=2138, avg=147.83, 
stdev=54.12 00:10:52.492 lat (usec): min=118, max=2157, avg=168.97, stdev=54.91 00:10:52.492 clat percentiles (usec): 00:10:52.492 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 122], 00:10:52.492 | 30.00th=[ 127], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 143], 00:10:52.492 | 70.00th=[ 153], 80.00th=[ 172], 90.00th=[ 196], 95.00th=[ 217], 00:10:52.492 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 750], 99.95th=[ 807], 00:10:52.492 | 99.99th=[ 2147] 00:10:52.492 bw ( KiB/s): min=12288, max=12288, per=35.58%, avg=12288.00, stdev= 0.00, samples=1 00:10:52.492 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:52.492 lat (usec) : 100=0.02%, 250=92.93%, 500=6.90%, 750=0.11%, 1000=0.02% 00:10:52.492 lat (msec) : 4=0.02% 00:10:52.492 cpu : usr=1.40%, sys=7.90%, ctx=5273, majf=0, minf=14 00:10:52.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.492 issued rwts: total=2560,2713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.493 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.493 00:10:52.493 Run status group 0 (all jobs): 00:10:52.493 READ: bw=29.5MiB/s (30.9MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=29.5MiB (30.9MB), run=1001-1001msec 00:10:52.493 WRITE: bw=33.7MiB/s (35.4MB/s), 7556KiB/s-10.6MiB/s (7738kB/s-11.1MB/s), io=33.8MiB (35.4MB), run=1001-1001msec 00:10:52.493 00:10:52.493 Disk stats (read/write): 00:10:52.493 nvme0n1: ios=1436/1536, merge=0/0, ticks=484/331, in_queue=815, util=88.38% 00:10:52.493 nvme0n2: ios=1366/1536, merge=0/0, ticks=555/293, in_queue=848, util=89.48% 00:10:52.493 nvme0n3: ios=1553/1993, merge=0/0, ticks=475/337, in_queue=812, util=89.30% 00:10:52.493 nvme0n4: ios=2203/2560, merge=0/0, ticks=411/394, in_queue=805, util=89.86% 00:10:52.493 16:16:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:52.493 [global] 00:10:52.493 thread=1 00:10:52.493 invalidate=1 00:10:52.493 rw=write 00:10:52.493 time_based=1 00:10:52.493 runtime=1 00:10:52.493 ioengine=libaio 00:10:52.493 direct=1 00:10:52.493 bs=4096 00:10:52.493 iodepth=128 00:10:52.493 norandommap=0 00:10:52.493 numjobs=1 00:10:52.493 00:10:52.493 verify_dump=1 00:10:52.493 verify_backlog=512 00:10:52.493 verify_state_save=0 00:10:52.493 do_verify=1 00:10:52.493 verify=crc32c-intel 00:10:52.493 [job0] 00:10:52.493 filename=/dev/nvme0n1 00:10:52.493 [job1] 00:10:52.493 filename=/dev/nvme0n2 00:10:52.493 [job2] 00:10:52.493 filename=/dev/nvme0n3 00:10:52.493 [job3] 00:10:52.493 filename=/dev/nvme0n4 00:10:52.493 Could not set queue depth (nvme0n1) 00:10:52.493 Could not set queue depth (nvme0n2) 00:10:52.493 Could not set queue depth (nvme0n3) 00:10:52.493 Could not set queue depth (nvme0n4) 00:10:52.493 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.493 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.493 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.493 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.493 fio-3.35 00:10:52.493 Starting 4 threads 00:10:53.869 00:10:53.869 job0: 
(groupid=0, jobs=1): err= 0: pid=78278: Tue Nov 26 16:16:19 2024 00:10:53.869 read: IOPS=5326, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1003msec) 00:10:53.869 slat (usec): min=4, max=3689, avg=88.77, stdev=347.69 00:10:53.869 clat (usec): min=562, max=15476, avg=11661.95, stdev=1245.99 00:10:53.869 lat (usec): min=2688, max=15511, avg=11750.72, stdev=1273.71 00:10:53.869 clat percentiles (usec): 00:10:53.869 | 1.00th=[ 6390], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11338], 00:10:53.869 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:10:53.869 | 70.00th=[11863], 80.00th=[12125], 90.00th=[13042], 95.00th=[13566], 00:10:53.869 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15139], 99.95th=[15270], 00:10:53.869 | 99.99th=[15533] 00:10:53.869 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:53.869 slat (usec): min=10, max=3275, avg=86.08, stdev=375.14 00:10:53.869 clat (usec): min=8368, max=15219, avg=11435.39, stdev=991.90 00:10:53.869 lat (usec): min=8389, max=15235, avg=11521.47, stdev=1048.67 00:10:53.869 clat percentiles (usec): 00:10:53.869 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10552], 20.00th=[10683], 00:10:53.869 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:10:53.869 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12780], 95.00th=[13698], 00:10:53.869 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15139], 99.95th=[15270], 00:10:53.869 | 99.99th=[15270] 00:10:53.869 bw ( KiB/s): min=22184, max=22872, per=35.57%, avg=22528.00, stdev=486.49, samples=2 00:10:53.869 iops : min= 5546, max= 5718, avg=5632.00, stdev=121.62, samples=2 00:10:53.869 lat (usec) : 750=0.01% 00:10:53.869 lat (msec) : 4=0.29%, 10=3.66%, 20=96.04% 00:10:53.869 cpu : usr=4.99%, sys=14.87%, ctx=509, majf=0, minf=1 00:10:53.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:53.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.869 issued rwts: total=5342,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.869 job1: (groupid=0, jobs=1): err= 0: pid=78279: Tue Nov 26 16:16:19 2024 00:10:53.869 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:53.869 slat (usec): min=6, max=6805, avg=192.79, stdev=985.12 00:10:53.869 clat (usec): min=12129, max=28266, avg=25109.60, stdev=2050.29 00:10:53.869 lat (usec): min=12143, max=28290, avg=25302.39, stdev=1817.44 00:10:53.869 clat percentiles (usec): 00:10:53.869 | 1.00th=[12649], 5.00th=[20579], 10.00th=[24249], 20.00th=[24773], 00:10:53.869 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:10:53.869 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26608], 95.00th=[26870], 00:10:53.869 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:10:53.870 | 99.99th=[28181] 00:10:53.870 write: IOPS=2580, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1005msec); 0 zone resets 00:10:53.870 slat (usec): min=13, max=7412, avg=187.62, stdev=901.90 00:10:53.870 clat (usec): min=351, max=26871, avg=23831.21, stdev=2352.99 00:10:53.870 lat (usec): min=4993, max=26897, avg=24018.84, stdev=2162.10 00:10:53.870 clat percentiles (usec): 00:10:53.870 | 1.00th=[ 5735], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:10:53.870 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:10:53.870 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 
95.00th=[25035], 00:10:53.870 | 99.00th=[26870], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:10:53.870 | 99.99th=[26870] 00:10:53.870 bw ( KiB/s): min= 9208, max=11272, per=16.17%, avg=10240.00, stdev=1459.47, samples=2 00:10:53.870 iops : min= 2302, max= 2818, avg=2560.00, stdev=364.87, samples=2 00:10:53.870 lat (usec) : 500=0.02% 00:10:53.870 lat (msec) : 10=0.62%, 20=3.63%, 50=95.73% 00:10:53.870 cpu : usr=3.59%, sys=7.27%, ctx=163, majf=0, minf=8 00:10:53.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:53.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.870 issued rwts: total=2560,2593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.870 job2: (groupid=0, jobs=1): err= 0: pid=78280: Tue Nov 26 16:16:19 2024 00:10:53.870 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:53.870 slat (usec): min=6, max=8467, avg=197.56, stdev=1020.04 00:10:53.870 clat (usec): min=15732, max=29113, avg=24933.93, stdev=2029.50 00:10:53.870 lat (usec): min=15746, max=29127, avg=25131.49, stdev=1802.77 00:10:53.870 clat percentiles (usec): 00:10:53.870 | 1.00th=[16188], 5.00th=[21890], 10.00th=[22676], 20.00th=[23987], 00:10:53.870 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:10:53.870 | 70.00th=[25822], 80.00th=[26346], 90.00th=[26870], 95.00th=[27657], 00:10:53.870 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:10:53.870 | 99.99th=[29230] 00:10:53.870 write: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1005msec); 0 zone resets 00:10:53.870 slat (usec): min=8, max=6106, avg=181.08, stdev=870.59 00:10:53.870 clat (usec): min=340, max=27812, avg=23945.38, stdev=3142.09 00:10:53.870 lat (usec): min=4992, max=27828, avg=24126.47, stdev=3007.50 00:10:53.870 clat percentiles (usec): 00:10:53.870 | 1.00th=[ 5800], 5.00th=[19792], 10.00th=[21627], 20.00th=[23725], 00:10:53.870 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:10:53.870 | 70.00th=[24773], 80.00th=[25035], 90.00th=[26608], 95.00th=[27132], 00:10:53.870 | 99.00th=[27657], 99.50th=[27657], 99.90th=[27919], 99.95th=[27919], 00:10:53.870 | 99.99th=[27919] 00:10:53.870 bw ( KiB/s): min= 8952, max=11504, per=16.15%, avg=10228.00, stdev=1804.54, samples=2 00:10:53.870 iops : min= 2238, max= 2876, avg=2557.00, stdev=451.13, samples=2 00:10:53.870 lat (usec) : 500=0.02% 00:10:53.870 lat (msec) : 10=1.23%, 20=2.89%, 50=95.85% 00:10:53.870 cpu : usr=2.29%, sys=8.47%, ctx=163, majf=0, minf=1 00:10:53.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:53.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.870 issued rwts: total=2560,2625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.870 job3: (groupid=0, jobs=1): err= 0: pid=78281: Tue Nov 26 16:16:19 2024 00:10:53.870 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:10:53.870 slat (usec): min=4, max=3266, avg=100.27, stdev=475.67 00:10:53.870 clat (usec): min=10055, max=14561, avg=13430.28, stdev=629.77 00:10:53.870 lat (usec): min=12376, max=14583, avg=13530.55, stdev=420.16 00:10:53.870 clat percentiles (usec): 00:10:53.870 | 1.00th=[10683], 5.00th=[12649], 
10.00th=[12911], 20.00th=[13173], 00:10:53.870 | 30.00th=[13304], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:10:53.870 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14091], 95.00th=[14222], 00:10:53.870 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14484], 99.95th=[14484], 00:10:53.870 | 99.99th=[14615] 00:10:53.870 write: IOPS=5055, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1001msec); 0 zone resets 00:10:53.870 slat (usec): min=12, max=4574, avg=98.60, stdev=423.78 00:10:53.870 clat (usec): min=456, max=15272, avg=12742.80, stdev=1218.24 00:10:53.870 lat (usec): min=478, max=15291, avg=12841.40, stdev=1143.91 00:10:53.870 clat percentiles (usec): 00:10:53.870 | 1.00th=[ 6194], 5.00th=[11731], 10.00th=[12256], 20.00th=[12518], 00:10:53.870 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:53.870 | 70.00th=[13173], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:10:53.870 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15270], 99.95th=[15270], 00:10:53.870 | 99.99th=[15270] 00:10:53.870 bw ( KiB/s): min=18992, max=20480, per=31.17%, avg=19736.00, stdev=1052.17, samples=2 00:10:53.870 iops : min= 4748, max= 5120, avg=4934.00, stdev=263.04, samples=2 00:10:53.870 lat (usec) : 500=0.02%, 750=0.03% 00:10:53.870 lat (msec) : 4=0.33%, 10=0.83%, 20=98.79% 00:10:53.870 cpu : usr=4.70%, sys=13.60%, ctx=303, majf=0, minf=2 00:10:53.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:53.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.870 issued rwts: total=4608,5061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.870 00:10:53.870 Run status group 0 (all jobs): 00:10:53.870 READ: bw=58.6MiB/s (61.4MB/s), 9.95MiB/s-20.8MiB/s (10.4MB/s-21.8MB/s), io=58.9MiB (61.7MB), run=1001-1005msec 00:10:53.870 WRITE: bw=61.8MiB/s (64.8MB/s), 10.1MiB/s-21.9MiB/s (10.6MB/s-23.0MB/s), io=62.2MiB (65.2MB), run=1001-1005msec 00:10:53.870 00:10:53.870 Disk stats (read/write): 00:10:53.870 nvme0n1: ios=4658/4778, merge=0/0, ticks=16848/15378, in_queue=32226, util=88.25% 00:10:53.870 nvme0n2: ios=2090/2336, merge=0/0, ticks=12409/12809, in_queue=25218, util=88.04% 00:10:53.870 nvme0n3: ios=2048/2368, merge=0/0, ticks=12750/12519, in_queue=25269, util=89.20% 00:10:53.870 nvme0n4: ios=4096/4160, merge=0/0, ticks=12312/11582, in_queue=23894, util=89.75% 00:10:53.870 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:53.870 [global] 00:10:53.870 thread=1 00:10:53.870 invalidate=1 00:10:53.870 rw=randwrite 00:10:53.870 time_based=1 00:10:53.870 runtime=1 00:10:53.870 ioengine=libaio 00:10:53.870 direct=1 00:10:53.870 bs=4096 00:10:53.870 iodepth=128 00:10:53.870 norandommap=0 00:10:53.870 numjobs=1 00:10:53.870 00:10:53.870 verify_dump=1 00:10:53.870 verify_backlog=512 00:10:53.870 verify_state_save=0 00:10:53.870 do_verify=1 00:10:53.870 verify=crc32c-intel 00:10:53.870 [job0] 00:10:53.870 filename=/dev/nvme0n1 00:10:53.870 [job1] 00:10:53.870 filename=/dev/nvme0n2 00:10:53.870 [job2] 00:10:53.870 filename=/dev/nvme0n3 00:10:53.870 [job3] 00:10:53.870 filename=/dev/nvme0n4 00:10:53.870 Could not set queue depth (nvme0n1) 00:10:53.870 Could not set queue depth (nvme0n2) 00:10:53.870 Could not set queue depth (nvme0n3) 00:10:53.870 Could not set queue depth 
(nvme0n4) 00:10:53.870 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.870 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.870 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.870 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:53.870 fio-3.35 00:10:53.870 Starting 4 threads 00:10:55.248 00:10:55.248 job0: (groupid=0, jobs=1): err= 0: pid=78345: Tue Nov 26 16:16:20 2024 00:10:55.248 read: IOPS=5556, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1004msec) 00:10:55.248 slat (usec): min=7, max=6957, avg=85.90, stdev=541.04 00:10:55.248 clat (usec): min=1867, max=18842, avg=11918.88, stdev=1452.28 00:10:55.248 lat (usec): min=5561, max=22169, avg=12004.78, stdev=1474.03 00:10:55.248 clat percentiles (usec): 00:10:55.248 | 1.00th=[ 6652], 5.00th=[ 8586], 10.00th=[10945], 20.00th=[11469], 00:10:55.248 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:10:55.248 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12780], 95.00th=[13698], 00:10:55.248 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:10:55.248 | 99.99th=[18744] 00:10:55.248 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:10:55.248 slat (usec): min=10, max=7519, avg=85.35, stdev=491.28 00:10:55.248 clat (usec): min=5633, max=14632, avg=10770.45, stdev=1000.36 00:10:55.248 lat (usec): min=7555, max=14698, avg=10855.79, stdev=900.37 00:10:55.248 clat percentiles (usec): 00:10:55.248 | 1.00th=[ 7111], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:10:55.248 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:10:55.248 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11994], 00:10:55.248 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14615], 99.95th=[14615], 00:10:55.248 | 99.99th=[14615] 00:10:55.248 bw ( KiB/s): min=21000, max=24007, per=34.59%, avg=22503.50, stdev=2126.27, samples=2 00:10:55.248 iops : min= 5250, max= 6001, avg=5625.50, stdev=531.04, samples=2 00:10:55.248 lat (msec) : 2=0.01%, 10=10.93%, 20=89.06% 00:10:55.248 cpu : usr=5.08%, sys=14.06%, ctx=264, majf=0, minf=13 00:10:55.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:55.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.248 issued rwts: total=5579,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.248 job1: (groupid=0, jobs=1): err= 0: pid=78346: Tue Nov 26 16:16:20 2024 00:10:55.248 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:55.248 slat (usec): min=7, max=5950, avg=84.84, stdev=522.60 00:10:55.248 clat (usec): min=3006, max=19034, avg=11859.39, stdev=1348.41 00:10:55.248 lat (usec): min=3032, max=22571, avg=11944.23, stdev=1370.04 00:10:55.248 clat percentiles (usec): 00:10:55.248 | 1.00th=[ 7242], 5.00th=[10421], 10.00th=[11076], 20.00th=[11469], 00:10:55.248 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:10:55.248 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12649], 95.00th=[12911], 00:10:55.248 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006], 00:10:55.248 | 99.99th=[19006] 00:10:55.249 write: IOPS=5623, 
BW=22.0MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:55.249 slat (usec): min=9, max=7844, avg=85.26, stdev=494.69 00:10:55.249 clat (usec): min=1847, max=15086, avg=10679.28, stdev=995.76 00:10:55.249 lat (usec): min=1868, max=15110, avg=10764.54, stdev=891.55 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[ 6980], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10159], 00:10:55.249 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:10:55.249 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11338], 95.00th=[11600], 00:10:55.249 | 99.00th=[15008], 99.50th=[15008], 99.90th=[15139], 99.95th=[15139], 00:10:55.249 | 99.99th=[15139] 00:10:55.249 bw ( KiB/s): min=21008, max=24007, per=34.60%, avg=22507.50, stdev=2120.61, samples=2 00:10:55.249 iops : min= 5252, max= 6001, avg=5626.50, stdev=529.62, samples=2 00:10:55.249 lat (msec) : 2=0.05%, 4=0.09%, 10=9.36%, 20=90.50% 00:10:55.249 cpu : usr=4.69%, sys=14.37%, ctx=240, majf=0, minf=13 00:10:55.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:55.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.249 issued rwts: total=5632,5640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.249 job2: (groupid=0, jobs=1): err= 0: pid=78347: Tue Nov 26 16:16:20 2024 00:10:55.249 read: IOPS=2360, BW=9443KiB/s (9670kB/s)(9500KiB/1006msec) 00:10:55.249 slat (usec): min=11, max=14491, avg=187.34, stdev=1224.88 00:10:55.249 clat (usec): min=2356, max=46858, avg=26230.45, stdev=4120.01 00:10:55.249 lat (usec): min=11491, max=53021, avg=26417.79, stdev=4072.04 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[11994], 5.00th=[17433], 10.00th=[20841], 20.00th=[25560], 00:10:55.249 | 30.00th=[26346], 40.00th=[26608], 50.00th=[26870], 60.00th=[27132], 00:10:55.249 | 70.00th=[27395], 80.00th=[27657], 90.00th=[28443], 95.00th=[29230], 00:10:55.249 | 99.00th=[46400], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:10:55.249 | 99.99th=[46924] 00:10:55.249 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:10:55.249 slat (usec): min=7, max=27692, avg=209.65, stdev=1437.89 00:10:55.249 clat (usec): min=12023, max=42147, avg=25459.63, stdev=3741.42 00:10:55.249 lat (usec): min=19905, max=42173, avg=25669.29, stdev=3533.45 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[14615], 5.00th=[21365], 10.00th=[22676], 20.00th=[23462], 00:10:55.249 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:10:55.249 | 70.00th=[25560], 80.00th=[25822], 90.00th=[30278], 95.00th=[32900], 00:10:55.249 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:55.249 | 99.99th=[42206] 00:10:55.249 bw ( KiB/s): min= 9736, max=10722, per=15.73%, avg=10229.00, stdev=697.21, samples=2 00:10:55.249 iops : min= 2434, max= 2680, avg=2557.00, stdev=173.95, samples=2 00:10:55.249 lat (msec) : 4=0.02%, 20=4.78%, 50=95.20% 00:10:55.249 cpu : usr=2.69%, sys=7.26%, ctx=103, majf=0, minf=13 00:10:55.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:55.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.249 issued rwts: total=2375,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.249 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:55.249 job3: (groupid=0, jobs=1): err= 0: pid=78348: Tue Nov 26 16:16:20 2024 00:10:55.249 read: IOPS=2407, BW=9631KiB/s (9862kB/s)(9708KiB/1008msec) 00:10:55.249 slat (usec): min=7, max=19553, avg=207.00, stdev=1582.91 00:10:55.249 clat (usec): min=1578, max=44339, avg=27026.78, stdev=3523.90 00:10:55.249 lat (usec): min=13368, max=50338, avg=27233.78, stdev=3746.00 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[13829], 5.00th=[21103], 10.00th=[24249], 20.00th=[25822], 00:10:55.249 | 30.00th=[26346], 40.00th=[26870], 50.00th=[27132], 60.00th=[27395], 00:10:55.249 | 70.00th=[27657], 80.00th=[27657], 90.00th=[32113], 95.00th=[32900], 00:10:55.249 | 99.00th=[35390], 99.50th=[40109], 99.90th=[43779], 99.95th=[44303], 00:10:55.249 | 99.99th=[44303] 00:10:55.249 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:10:55.249 slat (usec): min=4, max=16485, avg=188.34, stdev=1295.07 00:10:55.249 clat (usec): min=11731, max=33082, avg=24298.94, stdev=3860.37 00:10:55.249 lat (usec): min=11755, max=33112, avg=24487.28, stdev=3684.22 00:10:55.249 clat percentiles (usec): 00:10:55.249 | 1.00th=[12256], 5.00th=[18220], 10.00th=[19268], 20.00th=[22938], 00:10:55.249 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:10:55.249 | 70.00th=[25560], 80.00th=[25822], 90.00th=[29230], 95.00th=[31065], 00:10:55.249 | 99.00th=[32900], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:10:55.249 | 99.99th=[33162] 00:10:55.249 bw ( KiB/s): min= 9224, max=11233, per=15.72%, avg=10228.50, stdev=1420.58, samples=2 00:10:55.249 iops : min= 2306, max= 2808, avg=2557.00, stdev=354.97, samples=2 00:10:55.249 lat (msec) : 2=0.02%, 20=8.22%, 50=91.76% 00:10:55.249 cpu : usr=2.68%, sys=7.15%, ctx=107, majf=0, minf=11 00:10:55.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:55.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.249 issued rwts: total=2427,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.249 00:10:55.249 Run status group 0 (all jobs): 00:10:55.249 READ: bw=62.1MiB/s (65.1MB/s), 9443KiB/s-21.9MiB/s (9670kB/s-23.0MB/s), io=62.6MiB (65.6MB), run=1003-1008msec 00:10:55.249 WRITE: bw=63.5MiB/s (66.6MB/s), 9.92MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=64.0MiB (67.1MB), run=1003-1008msec 00:10:55.249 00:10:55.249 Disk stats (read/write): 00:10:55.249 nvme0n1: ios=4658/4992, merge=0/0, ticks=52314/49172, in_queue=101486, util=89.07% 00:10:55.249 nvme0n2: ios=4656/5056, merge=0/0, ticks=51480/49392, in_queue=100872, util=89.07% 00:10:55.249 nvme0n3: ios=2040/2056, merge=0/0, ticks=52554/50368, in_queue=102922, util=89.18% 00:10:55.249 nvme0n4: ios=2040/2120, merge=0/0, ticks=54494/48897, in_queue=103391, util=89.73% 00:10:55.249 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:55.249 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=78361 00:10:55.249 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:55.249 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:55.249 [global] 00:10:55.249 thread=1 00:10:55.249 invalidate=1 00:10:55.249 rw=read 00:10:55.249 time_based=1 
00:10:55.249 runtime=10 00:10:55.249 ioengine=libaio 00:10:55.249 direct=1 00:10:55.249 bs=4096 00:10:55.249 iodepth=1 00:10:55.249 norandommap=1 00:10:55.249 numjobs=1 00:10:55.249 00:10:55.249 [job0] 00:10:55.249 filename=/dev/nvme0n1 00:10:55.249 [job1] 00:10:55.249 filename=/dev/nvme0n2 00:10:55.249 [job2] 00:10:55.249 filename=/dev/nvme0n3 00:10:55.249 [job3] 00:10:55.249 filename=/dev/nvme0n4 00:10:55.249 Could not set queue depth (nvme0n1) 00:10:55.249 Could not set queue depth (nvme0n2) 00:10:55.249 Could not set queue depth (nvme0n3) 00:10:55.249 Could not set queue depth (nvme0n4) 00:10:55.249 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.249 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.249 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.249 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:55.249 fio-3.35 00:10:55.249 Starting 4 threads 00:10:58.534 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:58.534 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33734656, buflen=4096 00:10:58.534 fio: pid=78404, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:58.534 16:16:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:58.534 fio: pid=78403, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:58.534 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38436864, buflen=4096 00:10:58.792 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.792 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:59.051 fio: pid=78401, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:59.051 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47157248, buflen=4096 00:10:59.051 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.051 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:59.309 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=19435520, buflen=4096 00:10:59.309 fio: pid=78402, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:59.309 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.309 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:59.309 00:10:59.309 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78401: Tue Nov 26 16:16:24 2024 00:10:59.309 read: IOPS=3235, BW=12.6MiB/s (13.2MB/s)(45.0MiB/3559msec) 00:10:59.309 slat (usec): min=7, max=11225, avg=15.47, stdev=173.27 
00:10:59.309 clat (usec): min=119, max=5785, avg=292.33, stdev=124.88 00:10:59.309 lat (usec): min=131, max=11531, avg=307.81, stdev=212.34 00:10:59.309 clat percentiles (usec): 00:10:59.309 | 1.00th=[ 143], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 186], 00:10:59.309 | 30.00th=[ 221], 40.00th=[ 293], 50.00th=[ 334], 60.00th=[ 343], 00:10:59.309 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 383], 00:10:59.309 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 1074], 99.95th=[ 2540], 00:10:59.309 | 99.99th=[ 4621] 00:10:59.309 bw ( KiB/s): min=10888, max=19408, per=23.67%, avg=12420.00, stdev=3424.39, samples=6 00:10:59.309 iops : min= 2722, max= 4852, avg=3105.00, stdev=856.10, samples=6 00:10:59.309 lat (usec) : 250=33.79%, 500=66.04%, 750=0.03%, 1000=0.01% 00:10:59.309 lat (msec) : 2=0.06%, 4=0.03%, 10=0.03% 00:10:59.309 cpu : usr=0.98%, sys=3.77%, ctx=11520, majf=0, minf=1 00:10:59.309 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.309 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.309 issued rwts: total=11514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.309 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.309 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78402: Tue Nov 26 16:16:24 2024 00:10:59.309 read: IOPS=5514, BW=21.5MiB/s (22.6MB/s)(82.5MiB/3832msec) 00:10:59.309 slat (usec): min=8, max=12809, avg=14.55, stdev=156.72 00:10:59.309 clat (usec): min=119, max=16484, avg=165.66, stdev=127.14 00:10:59.309 lat (usec): min=132, max=16505, avg=180.21, stdev=202.95 00:10:59.309 clat percentiles (usec): 00:10:59.309 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:59.309 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:10:59.309 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 194], 95.00th=[ 223], 00:10:59.309 | 99.00th=[ 293], 99.50th=[ 314], 99.90th=[ 545], 99.95th=[ 1500], 00:10:59.309 | 99.99th=[ 3785] 00:10:59.309 bw ( KiB/s): min=15125, max=23776, per=42.28%, avg=22182.43, stdev=3140.92, samples=7 00:10:59.309 iops : min= 3781, max= 5944, avg=5545.57, stdev=785.32, samples=7 00:10:59.309 lat (usec) : 250=97.05%, 500=2.83%, 750=0.05%, 1000=0.01% 00:10:59.309 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01%, 20=0.01% 00:10:59.309 cpu : usr=1.54%, sys=5.98%, ctx=21152, majf=0, minf=1 00:10:59.309 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.309 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.310 issued rwts: total=21130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.310 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78403: Tue Nov 26 16:16:24 2024 00:10:59.310 read: IOPS=2903, BW=11.3MiB/s (11.9MB/s)(36.7MiB/3232msec) 00:10:59.310 slat (usec): min=9, max=7759, avg=18.42, stdev=102.38 00:10:59.310 clat (usec): min=145, max=2194, avg=324.40, stdev=49.20 00:10:59.310 lat (usec): min=158, max=8016, avg=342.82, stdev=113.04 00:10:59.310 clat percentiles (usec): 00:10:59.310 | 1.00th=[ 186], 5.00th=[ 251], 10.00th=[ 265], 20.00th=[ 285], 00:10:59.310 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:10:59.310 | 70.00th=[ 347], 
80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 379], 00:10:59.310 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 453], 99.95th=[ 734], 00:10:59.310 | 99.99th=[ 2180] 00:10:59.310 bw ( KiB/s): min=10992, max=13232, per=21.98%, avg=11532.00, stdev=877.39, samples=6 00:10:59.310 iops : min= 2748, max= 3308, avg=2883.00, stdev=219.35, samples=6 00:10:59.310 lat (usec) : 250=4.95%, 500=94.97%, 750=0.02%, 1000=0.01% 00:10:59.310 lat (msec) : 2=0.02%, 4=0.01% 00:10:59.310 cpu : usr=1.21%, sys=4.77%, ctx=9392, majf=0, minf=2 00:10:59.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.310 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.310 issued rwts: total=9385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.310 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78404: Tue Nov 26 16:16:24 2024 00:10:59.310 read: IOPS=2773, BW=10.8MiB/s (11.4MB/s)(32.2MiB/2970msec) 00:10:59.310 slat (nsec): min=10703, max=85601, avg=19510.44, stdev=6240.59 00:10:59.310 clat (usec): min=138, max=2285, avg=338.84, stdev=58.46 00:10:59.310 lat (usec): min=149, max=2311, avg=358.35, stdev=60.70 00:10:59.310 clat percentiles (usec): 00:10:59.310 | 1.00th=[ 247], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 310], 00:10:59.310 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 343], 00:10:59.310 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 400], 00:10:59.310 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 644], 99.95th=[ 701], 00:10:59.310 | 99.99th=[ 2278] 00:10:59.310 bw ( KiB/s): min=10456, max=13200, per=21.31%, avg=11180.80, stdev=1135.92, samples=5 00:10:59.310 iops : min= 2614, max= 3300, avg=2795.20, stdev=283.98, samples=5 00:10:59.310 lat (usec) : 250=1.32%, 500=95.58%, 750=3.05%, 1000=0.01% 00:10:59.310 lat (msec) : 2=0.01%, 4=0.01% 00:10:59.310 cpu : usr=1.21%, sys=4.88%, ctx=8238, majf=0, minf=2 00:10:59.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.310 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.310 issued rwts: total=8237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.310 00:10:59.310 Run status group 0 (all jobs): 00:10:59.310 READ: bw=51.2MiB/s (53.7MB/s), 10.8MiB/s-21.5MiB/s (11.4MB/s-22.6MB/s), io=196MiB (206MB), run=2970-3832msec 00:10:59.310 00:10:59.310 Disk stats (read/write): 00:10:59.310 nvme0n1: ios=10681/0, merge=0/0, ticks=2976/0, in_queue=2976, util=95.22% 00:10:59.310 nvme0n2: ios=19837/0, merge=0/0, ticks=3323/0, in_queue=3323, util=95.45% 00:10:59.310 nvme0n3: ios=9000/0, merge=0/0, ticks=2940/0, in_queue=2940, util=96.43% 00:10:59.310 nvme0n4: ios=7972/0, merge=0/0, ticks=2736/0, in_queue=2736, util=96.76% 00:10:59.568 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:59.568 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:59.826 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:10:59.826 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:00.392 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.392 16:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:00.392 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.392 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:00.651 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:00.651 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 78361 00:11:00.651 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:00.651 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.651 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.651 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:00.651 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.651 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:00.909 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.909 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:00.909 nvmf hotplug test: fio failed as expected 00:11:00.909 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:00.909 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:00.909 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:00.909 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.168 rmmod nvme_tcp 00:11:01.168 rmmod nvme_fabrics 00:11:01.168 rmmod nvme_keyring 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 77982 ']' 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 77982 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 77982 ']' 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 77982 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77982 00:11:01.168 killing process with pid 77982 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77982' 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 77982 00:11:01.168 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 77982 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:01.427 16:16:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:01.427 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:01.428 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:01.428 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:01.428 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:01.428 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:01.428 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:01.428 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:01.428 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.428 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.428 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:01.687 00:11:01.687 real 0m19.688s 00:11:01.687 user 1m14.811s 00:11:01.687 sys 0m9.852s 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.687 ************************************ 00:11:01.687 END TEST nvmf_fio_target 00:11:01.687 ************************************ 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.687 ************************************ 00:11:01.687 START TEST nvmf_bdevio 00:11:01.687 ************************************ 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:01.687 * Looking for test storage... 
00:11:01.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.687 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.687 --rc genhtml_branch_coverage=1 00:11:01.687 --rc genhtml_function_coverage=1 00:11:01.687 --rc genhtml_legend=1 00:11:01.687 --rc geninfo_all_blocks=1 00:11:01.687 --rc geninfo_unexecuted_blocks=1 00:11:01.687 00:11:01.687 ' 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.947 --rc genhtml_branch_coverage=1 00:11:01.947 --rc genhtml_function_coverage=1 00:11:01.947 --rc genhtml_legend=1 00:11:01.947 --rc geninfo_all_blocks=1 00:11:01.947 --rc geninfo_unexecuted_blocks=1 00:11:01.947 00:11:01.947 ' 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.947 --rc genhtml_branch_coverage=1 00:11:01.947 --rc genhtml_function_coverage=1 00:11:01.947 --rc genhtml_legend=1 00:11:01.947 --rc geninfo_all_blocks=1 00:11:01.947 --rc geninfo_unexecuted_blocks=1 00:11:01.947 00:11:01.947 ' 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.947 --rc genhtml_branch_coverage=1 00:11:01.947 --rc genhtml_function_coverage=1 00:11:01.947 --rc genhtml_legend=1 00:11:01.947 --rc geninfo_all_blocks=1 00:11:01.947 --rc geninfo_unexecuted_blocks=1 00:11:01.947 00:11:01.947 ' 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.947 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.947 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
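nvmftestinit above ends up in nvmf_veth_init, whose individual ip/iptables invocations are traced over the following lines. As a reading aid, here is a condensed sketch of the topology those commands build (interface names, addresses and the TCP port are taken from the traced commands themselves; the second nvmf_init_if2/nvmf_tgt_if2 pair and the various "link set ... up" steps are elided, so this is a summary, not the verbatim nvmf/common.sh code):

    # one veth pair per side, target end moved into a private netns, both bridged together
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator veth
    ping -c 1 10.0.0.3                                                   # initiator -> target reachability check, as traced below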
00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:01.948 Cannot find device "nvmf_init_br" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:01.948 Cannot find device "nvmf_init_br2" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:01.948 Cannot find device "nvmf_tgt_br" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:01.948 Cannot find device "nvmf_tgt_br2" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:01.948 Cannot find device "nvmf_init_br" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:01.948 Cannot find device "nvmf_init_br2" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:01.948 Cannot find device "nvmf_tgt_br" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:01.948 Cannot find device "nvmf_tgt_br2" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:01.948 Cannot find device "nvmf_br" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:01.948 Cannot find device "nvmf_init_if" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:01.948 Cannot find device "nvmf_init_if2" 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:01.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:01.948 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:01.948 
16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:01.948 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:02.207 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:02.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:02.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:02.208 00:11:02.208 --- 10.0.0.3 ping statistics --- 00:11:02.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.208 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:02.208 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:02.208 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:11:02.208 00:11:02.208 --- 10.0.0.4 ping statistics --- 00:11:02.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.208 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:02.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:02.208 00:11:02.208 --- 10.0.0.1 ping statistics --- 00:11:02.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.208 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:02.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:02.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:11:02.208 00:11:02.208 --- 10.0.0.2 ping statistics --- 00:11:02.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.208 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=78727 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 78727 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 78727 ']' 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.208 16:16:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.208 [2024-11-26 16:16:27.813048] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:11:02.208 [2024-11-26 16:16:27.813190] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.467 [2024-11-26 16:16:27.965069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.467 [2024-11-26 16:16:27.992196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.467 [2024-11-26 16:16:27.992265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.467 [2024-11-26 16:16:27.992280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.467 [2024-11-26 16:16:27.992291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.467 [2024-11-26 16:16:27.992299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.467 [2024-11-26 16:16:27.993599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:02.467 [2024-11-26 16:16:27.993721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:02.467 [2024-11-26 16:16:27.993868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:02.467 [2024-11-26 16:16:27.993870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.467 [2024-11-26 16:16:28.029331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:02.467 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.467 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:02.467 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.468 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.468 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.726 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.726 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.726 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.727 [2024-11-26 16:16:28.141586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.727 Malloc0 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.727 [2024-11-26 16:16:28.199987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:02.727 { 00:11:02.727 "params": { 00:11:02.727 "name": "Nvme$subsystem", 00:11:02.727 "trtype": "$TEST_TRANSPORT", 00:11:02.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:02.727 "adrfam": "ipv4", 00:11:02.727 "trsvcid": "$NVMF_PORT", 00:11:02.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:02.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:02.727 "hdgst": ${hdgst:-false}, 00:11:02.727 "ddgst": ${ddgst:-false} 00:11:02.727 }, 00:11:02.727 "method": "bdev_nvme_attach_controller" 00:11:02.727 } 00:11:02.727 EOF 00:11:02.727 )") 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
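The printf traced a few lines below emits the single bdev_nvme_attach_controller entry (Nvme1 on 10.0.0.3:4420, subsystem nqn.2016-06.io.spdk:cnode1) that the bdevio binary consumes through --json /dev/fd/62; the /dev/fd path suggests the JSON arrives over an anonymous pipe rather than a file on disk. A minimal sketch of an equivalent manual invocation, assuming bash process substitution is what produces that descriptor (this is not the verbatim bdevio.sh line):

    # run the bdevio suite against the subsystem created by the rpc_cmd calls above
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)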
00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:02.727 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:02.727 "params": { 00:11:02.727 "name": "Nvme1", 00:11:02.727 "trtype": "tcp", 00:11:02.727 "traddr": "10.0.0.3", 00:11:02.727 "adrfam": "ipv4", 00:11:02.727 "trsvcid": "4420", 00:11:02.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:02.727 "hdgst": false, 00:11:02.727 "ddgst": false 00:11:02.727 }, 00:11:02.727 "method": "bdev_nvme_attach_controller" 00:11:02.727 }' 00:11:02.727 [2024-11-26 16:16:28.257062] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:11:02.727 [2024-11-26 16:16:28.257162] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78755 ] 00:11:03.066 [2024-11-26 16:16:28.413636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.066 [2024-11-26 16:16:28.442390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.066 [2024-11-26 16:16:28.442502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.066 [2024-11-26 16:16:28.442506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.066 [2024-11-26 16:16:28.485495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.066 I/O targets: 00:11:03.066 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:03.066 00:11:03.066 00:11:03.066 CUnit - A unit testing framework for C - Version 2.1-3 00:11:03.066 http://cunit.sourceforge.net/ 00:11:03.066 00:11:03.066 00:11:03.066 Suite: bdevio tests on: Nvme1n1 00:11:03.066 Test: blockdev write read block ...passed 00:11:03.066 Test: blockdev write zeroes read block ...passed 00:11:03.066 Test: blockdev write zeroes read no split ...passed 00:11:03.066 Test: blockdev write zeroes read split ...passed 00:11:03.066 Test: blockdev write zeroes read split partial ...passed 00:11:03.066 Test: blockdev reset ...[2024-11-26 16:16:28.612636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:03.066 [2024-11-26 16:16:28.612743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f8d50 (9): Bad file descriptor 00:11:03.066 passed 00:11:03.066 Test: blockdev write read 8 blocks ...[2024-11-26 16:16:28.630513] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:03.066 passed 00:11:03.066 Test: blockdev write read size > 128k ...passed 00:11:03.066 Test: blockdev write read invalid size ...passed 00:11:03.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.066 Test: blockdev write read max offset ...passed 00:11:03.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.066 Test: blockdev writev readv 8 blocks ...passed 00:11:03.066 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.066 Test: blockdev writev readv block ...passed 00:11:03.066 Test: blockdev writev readv size > 128k ...passed 00:11:03.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.066 Test: blockdev comparev and writev ...[2024-11-26 16:16:28.638182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.066 [2024-11-26 16:16:28.638236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:03.067 [2024-11-26 16:16:28.638262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.067 [2024-11-26 16:16:28.638275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:03.067 passed 00:11:03.067 Test: blockdev nvme passthru rw ...[2024-11-26 16:16:28.638674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.067 [2024-11-26 16:16:28.638714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:03.067 [2024-11-26 16:16:28.638735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.067 [2024-11-26 16:16:28.638748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:03.067 [2024-11-26 16:16:28.639026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.067 [2024-11-26 16:16:28.639045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:03.067 [2024-11-26 16:16:28.639066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.067 [2024-11-26 16:16:28.639078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:03.067 [2024-11-26 16:16:28.639397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.067 [2024-11-26 16:16:28.639418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:03.067 [2024-11-26 16:16:28.639438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:03.067 [2024-11-26 16:16:28.639451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:03.067 passed 00:11:03.067 Test: blockdev nvme passthru vendor specific ...[2024-11-26 16:16:28.640305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:03.067 [2024-11-26 16:16:28.640358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:03.067 [2024-11-26 16:16:28.640490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:03.067 [2024-11-26 16:16:28.640516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:03.067 [2024-11-26 16:16:28.640636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:03.067 [2024-11-26 16:16:28.640661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:03.067 [2024-11-26 16:16:28.640778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:03.067 passed 00:11:03.067 Test: blockdev nvme admin passthru ...[2024-11-26 16:16:28.640809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:03.067 passed 00:11:03.067 Test: blockdev copy ...passed 00:11:03.067 00:11:03.067 Run Summary: Type Total Ran Passed Failed Inactive 00:11:03.067 suites 1 1 n/a 0 0 00:11:03.067 tests 23 23 23 0 0 00:11:03.067 asserts 152 152 152 0 n/a 00:11:03.067 00:11:03.067 Elapsed time = 0.142 seconds 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:03.326 rmmod nvme_tcp 00:11:03.326 rmmod nvme_fabrics 00:11:03.326 rmmod nvme_keyring 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 78727 ']' 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 78727 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 78727 ']' 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 78727 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78727 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78727' 00:11:03.326 killing process with pid 78727 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 78727 00:11:03.326 16:16:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 78727 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:03.586 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:03.845 16:16:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:03.845 00:11:03.845 real 0m2.197s 00:11:03.845 user 0m5.539s 00:11:03.845 sys 0m0.740s 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.845 ************************************ 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 END TEST nvmf_bdevio 00:11:03.845 ************************************ 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:03.845 00:11:03.845 real 2m26.964s 00:11:03.845 user 6m22.852s 00:11:03.845 sys 0m52.347s 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 ************************************ 00:11:03.845 END TEST nvmf_target_core 00:11:03.845 ************************************ 00:11:03.845 16:16:29 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:03.845 16:16:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.845 16:16:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.845 16:16:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 ************************************ 00:11:03.845 START TEST nvmf_target_extra 00:11:03.845 ************************************ 00:11:03.845 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:04.105 * Looking for test storage... 
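The nvmftestfini/nvmf_tcp_fini teardown traced above can strip only the firewall rules the harness added because every iptables rule is installed with an 'SPDK_NVMF:' comment (the ipts calls later in this log show the tagging). A minimal sketch of that cleanup idea, assuming root and the same tag:

    # Rewrite the ruleset minus anything the SPDK tests added
    iptables-save | grep -v SPDK_NVMF | iptables-restore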
00:11:04.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.105 --rc genhtml_branch_coverage=1 00:11:04.105 --rc genhtml_function_coverage=1 00:11:04.105 --rc genhtml_legend=1 00:11:04.105 --rc geninfo_all_blocks=1 00:11:04.105 --rc geninfo_unexecuted_blocks=1 00:11:04.105 00:11:04.105 ' 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.105 --rc genhtml_branch_coverage=1 00:11:04.105 --rc genhtml_function_coverage=1 00:11:04.105 --rc genhtml_legend=1 00:11:04.105 --rc geninfo_all_blocks=1 00:11:04.105 --rc geninfo_unexecuted_blocks=1 00:11:04.105 00:11:04.105 ' 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.105 --rc genhtml_branch_coverage=1 00:11:04.105 --rc genhtml_function_coverage=1 00:11:04.105 --rc genhtml_legend=1 00:11:04.105 --rc geninfo_all_blocks=1 00:11:04.105 --rc geninfo_unexecuted_blocks=1 00:11:04.105 00:11:04.105 ' 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.105 --rc genhtml_branch_coverage=1 00:11:04.105 --rc genhtml_function_coverage=1 00:11:04.105 --rc genhtml_legend=1 00:11:04.105 --rc geninfo_all_blocks=1 00:11:04.105 --rc geninfo_unexecuted_blocks=1 00:11:04.105 00:11:04.105 ' 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.105 16:16:29 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.105 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.106 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:04.106 ************************************ 00:11:04.106 START TEST nvmf_auth_target 00:11:04.106 ************************************ 00:11:04.106 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:04.106 * Looking for test storage... 
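In the common.sh sourcing traced above, the host identity comes from nvme-cli: nvme gen-hostnqn emits a uuid-based NQN and the uuid portion is reused as the host ID. A small illustrative sketch of that derivation (the parameter expansion is mine, not the exact line from common.sh):

    # Generate a host NQN and recover the uuid for --hostid
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:088cee68-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the uuid suffix
    echo "$NVME_HOSTNQN -> $NVME_HOSTID"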
00:11:04.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.366 --rc genhtml_branch_coverage=1 00:11:04.366 --rc genhtml_function_coverage=1 00:11:04.366 --rc genhtml_legend=1 00:11:04.366 --rc geninfo_all_blocks=1 00:11:04.366 --rc geninfo_unexecuted_blocks=1 00:11:04.366 00:11:04.366 ' 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.366 --rc genhtml_branch_coverage=1 00:11:04.366 --rc genhtml_function_coverage=1 00:11:04.366 --rc genhtml_legend=1 00:11:04.366 --rc geninfo_all_blocks=1 00:11:04.366 --rc geninfo_unexecuted_blocks=1 00:11:04.366 00:11:04.366 ' 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.366 --rc genhtml_branch_coverage=1 00:11:04.366 --rc genhtml_function_coverage=1 00:11:04.366 --rc genhtml_legend=1 00:11:04.366 --rc geninfo_all_blocks=1 00:11:04.366 --rc geninfo_unexecuted_blocks=1 00:11:04.366 00:11:04.366 ' 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.366 --rc genhtml_branch_coverage=1 00:11:04.366 --rc genhtml_function_coverage=1 00:11:04.366 --rc genhtml_legend=1 00:11:04.366 --rc geninfo_all_blocks=1 00:11:04.366 --rc geninfo_unexecuted_blocks=1 00:11:04.366 00:11:04.366 ' 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.366 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.367 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:04.367 
16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:04.367 Cannot find device "nvmf_init_br" 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:04.367 Cannot find device "nvmf_init_br2" 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:04.367 Cannot find device "nvmf_tgt_br" 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.367 Cannot find device "nvmf_tgt_br2" 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:04.367 Cannot find device "nvmf_init_br" 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:04.367 Cannot find device "nvmf_init_br2" 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:04.367 Cannot find device "nvmf_tgt_br" 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:04.367 Cannot find device "nvmf_tgt_br2" 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:04.367 Cannot find device "nvmf_br" 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:04.367 16:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:04.367 Cannot find device "nvmf_init_if" 00:11:04.367 16:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:04.367 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:04.626 Cannot find device "nvmf_init_if2" 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:04.626 16:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:04.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:04.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:04.626 00:11:04.626 --- 10.0.0.3 ping statistics --- 00:11:04.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.626 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:04.626 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:04.626 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:11:04.626 00:11:04.626 --- 10.0.0.4 ping statistics --- 00:11:04.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.626 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:04.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:11:04.626 00:11:04.626 --- 10.0.0.1 ping statistics --- 00:11:04.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.626 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:11:04.626 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:04.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:11:04.885 00:11:04.885 --- 10.0.0.2 ping statistics --- 00:11:04.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.885 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=79038 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 79038 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 79038 ']' 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
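The four pings above verify the virtual topology that nvmf_veth_init just built: initiator-side veth ends stay in the root namespace (10.0.0.1/.2), target-side ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4), and both sides meet on the nvmf_br bridge with TCP/4420 allowed through iptables. A condensed sketch of the same wiring for one pair per side (names and addresses taken from the trace; the second pair and error handling omitted):

    # One initiator-side and one target-side veth pair, joined by a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3      # root namespace -> target namespace, as in the trace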
00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.885 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=79063 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3dfcdd585445cd57fb0179e990d73ec6f1d49beb625fa0f1 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.efW 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3dfcdd585445cd57fb0179e990d73ec6f1d49beb625fa0f1 0 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3dfcdd585445cd57fb0179e990d73ec6f1d49beb625fa0f1 0 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3dfcdd585445cd57fb0179e990d73ec6f1d49beb625fa0f1 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:05.144 16:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.efW 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.efW 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.efW 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=10726291352784f403f284454438bfac8dd02f34a8e64419bbe12a54f3ba778a 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:05.144 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rlF 00:11:05.145 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 10726291352784f403f284454438bfac8dd02f34a8e64419bbe12a54f3ba778a 3 00:11:05.145 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 10726291352784f403f284454438bfac8dd02f34a8e64419bbe12a54f3ba778a 3 00:11:05.145 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:05.145 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:05.145 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=10726291352784f403f284454438bfac8dd02f34a8e64419bbe12a54f3ba778a 00:11:05.145 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:05.145 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rlF 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rlF 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.rlF 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:05.403 16:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a641b0b204d8027084ea2b945cda05b 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.98y 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0a641b0b204d8027084ea2b945cda05b 1 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a641b0b204d8027084ea2b945cda05b 1 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a641b0b204d8027084ea2b945cda05b 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.98y 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.98y 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.98y 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=352ed433a53bdf5c3371a04e3e0eb9679ee4293d65c6c18d 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lNT 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 352ed433a53bdf5c3371a04e3e0eb9679ee4293d65c6c18d 2 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 352ed433a53bdf5c3371a04e3e0eb9679ee4293d65c6c18d 2 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=352ed433a53bdf5c3371a04e3e0eb9679ee4293d65c6c18d 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lNT 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lNT 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.lNT 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3c8b989db65ecc9539b2dfae433b37c8c9e7b71ef7502594 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DnI 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3c8b989db65ecc9539b2dfae433b37c8c9e7b71ef7502594 2 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3c8b989db65ecc9539b2dfae433b37c8c9e7b71ef7502594 2 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3c8b989db65ecc9539b2dfae433b37c8c9e7b71ef7502594 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:05.403 16:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DnI 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DnI 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.DnI 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:05.403 16:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6c4eff0b13af9e53bfe3b46c568689eb 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Ckm 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6c4eff0b13af9e53bfe3b46c568689eb 1 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6c4eff0b13af9e53bfe3b46c568689eb 1 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6c4eff0b13af9e53bfe3b46c568689eb 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:05.403 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Ckm 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Ckm 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Ckm 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f21fc880a3626094d939a02a6660c6df6c0e87fd5583789dad34f040336b5947 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Uu1 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
f21fc880a3626094d939a02a6660c6df6c0e87fd5583789dad34f040336b5947 3 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f21fc880a3626094d939a02a6660c6df6c0e87fd5583789dad34f040336b5947 3 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f21fc880a3626094d939a02a6660c6df6c0e87fd5583789dad34f040336b5947 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Uu1 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Uu1 00:11:05.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Uu1 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 79038 00:11:05.661 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 79038 ']' 00:11:05.662 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.662 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.662 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.662 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.662 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:05.920 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.920 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:05.920 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 79063 /var/tmp/host.sock 00:11:05.920 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 79063 ']' 00:11:05.920 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:05.920 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.920 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
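For reference, a stand-alone sketch (not taken from this log) of what the gen_dhchap_key / format_dhchap_key helpers traced above appear to do: draw len/2 random bytes as a hex string, wrap it in the DHHC-1:<digest-id>:<base64 blob>: secret format, and store it in a 0600 temp file. Only the outer DHHC-1:NN:...: shape and the xxd/mktemp/chmod steps are visible in the trace; the blob packing below (hex string plus a little-endian CRC-32) is an assumption made for illustration.

gen_dhchap_key() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Format the secret; the CRC-32 suffix and byte order are assumed here.
    python3 - "$digest" "$key" > "$file" <<'PY'
import base64, sys, zlib
digests = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}
digest, key = digests[sys.argv[1]], sys.argv[2].encode()
blob = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
print(f"DHHC-1:{digest:02}:{blob}:")
PY
    chmod 0600 "$file"
    echo "$file"
}

# Usage mirroring the traced calls, e.g.:
#   ckeys[0]=$(gen_dhchap_key sha512 64)
#   keys[1]=$(gen_dhchap_key sha256 32)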
00:11:05.920 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.920 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.efW 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.efW 00:11:06.179 16:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.efW 00:11:06.438 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.rlF ]] 00:11:06.438 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rlF 00:11:06.438 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.438 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.438 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.438 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rlF 00:11:06.438 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rlF 00:11:06.696 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:06.696 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.98y 00:11:06.696 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.696 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.696 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.696 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.98y 00:11:06.696 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.98y 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.lNT ]] 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lNT 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lNT 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lNT 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DnI 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.DnI 00:11:07.264 16:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.DnI 00:11:07.523 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Ckm ]] 00:11:07.523 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ckm 00:11:07.523 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.523 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.523 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ckm 00:11:07.523 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ckm 00:11:07.782 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:07.782 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Uu1 00:11:07.782 16:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.782 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.782 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.782 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Uu1 00:11:07.782 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Uu1 00:11:08.041 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:08.041 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:08.041 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:08.041 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.041 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:08.041 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.300 16:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.559 00:11:08.818 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.818 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.818 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.077 { 00:11:09.077 "cntlid": 1, 00:11:09.077 "qid": 0, 00:11:09.077 "state": "enabled", 00:11:09.077 "thread": "nvmf_tgt_poll_group_000", 00:11:09.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:09.077 "listen_address": { 00:11:09.077 "trtype": "TCP", 00:11:09.077 "adrfam": "IPv4", 00:11:09.077 "traddr": "10.0.0.3", 00:11:09.077 "trsvcid": "4420" 00:11:09.077 }, 00:11:09.077 "peer_address": { 00:11:09.077 "trtype": "TCP", 00:11:09.077 "adrfam": "IPv4", 00:11:09.077 "traddr": "10.0.0.1", 00:11:09.077 "trsvcid": "47096" 00:11:09.077 }, 00:11:09.077 "auth": { 00:11:09.077 "state": "completed", 00:11:09.077 "digest": "sha256", 00:11:09.077 "dhgroup": "null" 00:11:09.077 } 00:11:09.077 } 00:11:09.077 ]' 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.077 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.337 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:09.337 16:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.603 16:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.603 16:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.603 00:11:14.603 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.603 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.603 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.862 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.862 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.862 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.862 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.862 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.862 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.862 { 00:11:14.862 "cntlid": 3, 00:11:14.862 "qid": 0, 00:11:14.862 "state": "enabled", 00:11:14.862 "thread": "nvmf_tgt_poll_group_000", 00:11:14.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:14.862 "listen_address": { 00:11:14.862 "trtype": "TCP", 00:11:14.862 "adrfam": "IPv4", 00:11:14.862 "traddr": "10.0.0.3", 00:11:14.862 "trsvcid": "4420" 00:11:14.862 }, 00:11:14.862 "peer_address": { 00:11:14.862 "trtype": "TCP", 00:11:14.862 "adrfam": "IPv4", 00:11:14.862 "traddr": "10.0.0.1", 00:11:14.862 "trsvcid": "53080" 00:11:14.862 }, 00:11:14.862 "auth": { 00:11:14.862 "state": "completed", 00:11:14.862 "digest": "sha256", 00:11:14.862 "dhgroup": "null" 00:11:14.862 } 00:11:14.862 } 00:11:14.862 ]' 00:11:14.862 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.862 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.862 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.121 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:15.121 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.121 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.121 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.121 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.380 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret 
DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:15.380 16:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:16.315 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.315 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:16.315 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.315 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.315 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.315 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:16.315 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.574 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.575 16:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.833 00:11:16.833 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.833 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.833 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.093 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.093 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.093 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.093 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.093 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.093 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.093 { 00:11:17.093 "cntlid": 5, 00:11:17.093 "qid": 0, 00:11:17.093 "state": "enabled", 00:11:17.093 "thread": "nvmf_tgt_poll_group_000", 00:11:17.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:17.093 "listen_address": { 00:11:17.093 "trtype": "TCP", 00:11:17.093 "adrfam": "IPv4", 00:11:17.093 "traddr": "10.0.0.3", 00:11:17.093 "trsvcid": "4420" 00:11:17.093 }, 00:11:17.093 "peer_address": { 00:11:17.093 "trtype": "TCP", 00:11:17.093 "adrfam": "IPv4", 00:11:17.093 "traddr": "10.0.0.1", 00:11:17.093 "trsvcid": "53108" 00:11:17.093 }, 00:11:17.093 "auth": { 00:11:17.093 "state": "completed", 00:11:17.093 "digest": "sha256", 00:11:17.093 "dhgroup": "null" 00:11:17.093 } 00:11:17.093 } 00:11:17.093 ]' 00:11:17.093 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.093 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.093 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.352 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:17.352 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.352 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.352 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.352 16:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.611 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:17.611 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:18.549 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.549 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:18.549 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.549 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.549 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.549 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.549 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:18.549 16:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.549 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:19.117 00:11:19.117 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.117 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.117 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.376 { 00:11:19.376 "cntlid": 7, 00:11:19.376 "qid": 0, 00:11:19.376 "state": "enabled", 00:11:19.376 "thread": "nvmf_tgt_poll_group_000", 00:11:19.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:19.376 "listen_address": { 00:11:19.376 "trtype": "TCP", 00:11:19.376 "adrfam": "IPv4", 00:11:19.376 "traddr": "10.0.0.3", 00:11:19.376 "trsvcid": "4420" 00:11:19.376 }, 00:11:19.376 "peer_address": { 00:11:19.376 "trtype": "TCP", 00:11:19.376 "adrfam": "IPv4", 00:11:19.376 "traddr": "10.0.0.1", 00:11:19.376 "trsvcid": "53142" 00:11:19.376 }, 00:11:19.376 "auth": { 00:11:19.376 "state": "completed", 00:11:19.376 "digest": "sha256", 00:11:19.376 "dhgroup": "null" 00:11:19.376 } 00:11:19.376 } 00:11:19.376 ]' 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.376 16:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.943 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:19.943 16:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:20.511 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.511 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:20.511 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.511 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.511 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.511 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.511 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.511 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:20.511 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.769 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.335 00:11:21.335 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.335 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.335 16:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.594 { 00:11:21.594 "cntlid": 9, 00:11:21.594 "qid": 0, 00:11:21.594 "state": "enabled", 00:11:21.594 "thread": "nvmf_tgt_poll_group_000", 00:11:21.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:21.594 "listen_address": { 00:11:21.594 "trtype": "TCP", 00:11:21.594 "adrfam": "IPv4", 00:11:21.594 "traddr": "10.0.0.3", 00:11:21.594 "trsvcid": "4420" 00:11:21.594 }, 00:11:21.594 "peer_address": { 00:11:21.594 "trtype": "TCP", 00:11:21.594 "adrfam": "IPv4", 00:11:21.594 "traddr": "10.0.0.1", 00:11:21.594 "trsvcid": "47418" 00:11:21.594 }, 00:11:21.594 "auth": { 00:11:21.594 "state": "completed", 00:11:21.594 "digest": "sha256", 00:11:21.594 "dhgroup": "ffdhe2048" 00:11:21.594 } 00:11:21.594 } 00:11:21.594 ]' 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.594 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.161 
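For orientation, each connect_authenticate pass traced above reduces to the following RPC sequence; sockets, NQNs, key names and addresses are copied from the trace, and this is a condensed sketch rather than the full auth.sh loop.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a

# Host side: restrict the allowed digest/dhgroup for this pass.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side (default /var/tmp/spdk.sock): allow the host with this key pair.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach with the same keys, then check the negotiated auth parameters.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'

# Tear down before the next digest/dhgroup/key combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn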
16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:22.161 16:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:22.728 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.728 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:22.728 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.728 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.728 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.728 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.728 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:22.728 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.987 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.246 00:11:23.246 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.246 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.246 16:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.813 { 00:11:23.813 "cntlid": 11, 00:11:23.813 "qid": 0, 00:11:23.813 "state": "enabled", 00:11:23.813 "thread": "nvmf_tgt_poll_group_000", 00:11:23.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:23.813 "listen_address": { 00:11:23.813 "trtype": "TCP", 00:11:23.813 "adrfam": "IPv4", 00:11:23.813 "traddr": "10.0.0.3", 00:11:23.813 "trsvcid": "4420" 00:11:23.813 }, 00:11:23.813 "peer_address": { 00:11:23.813 "trtype": "TCP", 00:11:23.813 "adrfam": "IPv4", 00:11:23.813 "traddr": "10.0.0.1", 00:11:23.813 "trsvcid": "47436" 00:11:23.813 }, 00:11:23.813 "auth": { 00:11:23.813 "state": "completed", 00:11:23.813 "digest": "sha256", 00:11:23.813 "dhgroup": "ffdhe2048" 00:11:23.813 } 00:11:23.813 } 00:11:23.813 ]' 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.813 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.814 
16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.071 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:24.071 16:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:25.005 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.006 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.006 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.573 00:11:25.573 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.573 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.573 16:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.832 { 00:11:25.832 "cntlid": 13, 00:11:25.832 "qid": 0, 00:11:25.832 "state": "enabled", 00:11:25.832 "thread": "nvmf_tgt_poll_group_000", 00:11:25.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:25.832 "listen_address": { 00:11:25.832 "trtype": "TCP", 00:11:25.832 "adrfam": "IPv4", 00:11:25.832 "traddr": "10.0.0.3", 00:11:25.832 "trsvcid": "4420" 00:11:25.832 }, 00:11:25.832 "peer_address": { 00:11:25.832 "trtype": "TCP", 00:11:25.832 "adrfam": "IPv4", 00:11:25.832 "traddr": "10.0.0.1", 00:11:25.832 "trsvcid": "47472" 00:11:25.832 }, 00:11:25.832 "auth": { 00:11:25.832 "state": "completed", 00:11:25.832 "digest": "sha256", 00:11:25.832 "dhgroup": "ffdhe2048" 00:11:25.832 } 00:11:25.832 } 00:11:25.832 ]' 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.832 16:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.832 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.090 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:26.091 16:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.054 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:27.323 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.323 16:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.582 00:11:27.582 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.582 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.582 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.840 { 00:11:27.840 "cntlid": 15, 00:11:27.840 "qid": 0, 00:11:27.840 "state": "enabled", 00:11:27.840 "thread": "nvmf_tgt_poll_group_000", 00:11:27.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:27.840 "listen_address": { 00:11:27.840 "trtype": "TCP", 00:11:27.840 "adrfam": "IPv4", 00:11:27.840 "traddr": "10.0.0.3", 00:11:27.840 "trsvcid": "4420" 00:11:27.840 }, 00:11:27.840 "peer_address": { 00:11:27.840 "trtype": "TCP", 00:11:27.840 "adrfam": "IPv4", 00:11:27.840 "traddr": "10.0.0.1", 00:11:27.840 "trsvcid": "47498" 00:11:27.840 }, 00:11:27.840 "auth": { 00:11:27.840 "state": "completed", 00:11:27.840 "digest": "sha256", 00:11:27.840 "dhgroup": "ffdhe2048" 00:11:27.840 } 00:11:27.840 } 00:11:27.840 ]' 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.840 
16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.840 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.407 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:28.407 16:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:28.973 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.973 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:28.973 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.973 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.973 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.973 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.973 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.973 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:28.973 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.232 16:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.490 00:11:29.748 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.748 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.748 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.007 { 00:11:30.007 "cntlid": 17, 00:11:30.007 "qid": 0, 00:11:30.007 "state": "enabled", 00:11:30.007 "thread": "nvmf_tgt_poll_group_000", 00:11:30.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:30.007 "listen_address": { 00:11:30.007 "trtype": "TCP", 00:11:30.007 "adrfam": "IPv4", 00:11:30.007 "traddr": "10.0.0.3", 00:11:30.007 "trsvcid": "4420" 00:11:30.007 }, 00:11:30.007 "peer_address": { 00:11:30.007 "trtype": "TCP", 00:11:30.007 "adrfam": "IPv4", 00:11:30.007 "traddr": "10.0.0.1", 00:11:30.007 "trsvcid": "47516" 00:11:30.007 }, 00:11:30.007 "auth": { 00:11:30.007 "state": "completed", 00:11:30.007 "digest": "sha256", 00:11:30.007 "dhgroup": "ffdhe3072" 00:11:30.007 } 00:11:30.007 } 00:11:30.007 ]' 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.007 16:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.007 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.265 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:30.265 16:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.199 16:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.766 00:11:31.766 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.766 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.766 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.024 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.025 { 00:11:32.025 "cntlid": 19, 00:11:32.025 "qid": 0, 00:11:32.025 "state": "enabled", 00:11:32.025 "thread": "nvmf_tgt_poll_group_000", 00:11:32.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:32.025 "listen_address": { 00:11:32.025 "trtype": "TCP", 00:11:32.025 "adrfam": "IPv4", 00:11:32.025 "traddr": "10.0.0.3", 00:11:32.025 "trsvcid": "4420" 00:11:32.025 }, 00:11:32.025 "peer_address": { 00:11:32.025 "trtype": "TCP", 00:11:32.025 "adrfam": "IPv4", 00:11:32.025 "traddr": "10.0.0.1", 00:11:32.025 "trsvcid": "47500" 00:11:32.025 }, 00:11:32.025 "auth": { 00:11:32.025 "state": "completed", 00:11:32.025 "digest": "sha256", 00:11:32.025 "dhgroup": "ffdhe3072" 00:11:32.025 } 00:11:32.025 } 00:11:32.025 ]' 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.025 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.283 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:32.283 16:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.219 16:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.791 00:11:33.791 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.791 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.791 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.049 { 00:11:34.049 "cntlid": 21, 00:11:34.049 "qid": 0, 00:11:34.049 "state": "enabled", 00:11:34.049 "thread": "nvmf_tgt_poll_group_000", 00:11:34.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:34.049 "listen_address": { 00:11:34.049 "trtype": "TCP", 00:11:34.049 "adrfam": "IPv4", 00:11:34.049 "traddr": "10.0.0.3", 00:11:34.049 "trsvcid": "4420" 00:11:34.049 }, 00:11:34.049 "peer_address": { 00:11:34.049 "trtype": "TCP", 00:11:34.049 "adrfam": "IPv4", 00:11:34.049 "traddr": "10.0.0.1", 00:11:34.049 "trsvcid": "47538" 00:11:34.049 }, 00:11:34.049 "auth": { 00:11:34.049 "state": "completed", 00:11:34.049 "digest": "sha256", 00:11:34.049 "dhgroup": "ffdhe3072" 00:11:34.049 } 00:11:34.049 } 00:11:34.049 ]' 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.049 16:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.049 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.307 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:34.307 16:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:35.242 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.242 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:35.242 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.242 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.242 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.242 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.242 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:35.242 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.501 16:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.760 00:11:35.760 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.760 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.760 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.019 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.019 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.019 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.019 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.019 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.019 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.019 { 00:11:36.019 "cntlid": 23, 00:11:36.019 "qid": 0, 00:11:36.019 "state": "enabled", 00:11:36.019 "thread": "nvmf_tgt_poll_group_000", 00:11:36.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:36.019 "listen_address": { 00:11:36.019 "trtype": "TCP", 00:11:36.019 "adrfam": "IPv4", 00:11:36.019 "traddr": "10.0.0.3", 00:11:36.019 "trsvcid": "4420" 00:11:36.019 }, 00:11:36.019 "peer_address": { 00:11:36.019 "trtype": "TCP", 00:11:36.019 "adrfam": "IPv4", 00:11:36.019 "traddr": "10.0.0.1", 00:11:36.019 "trsvcid": "47564" 00:11:36.019 }, 00:11:36.019 "auth": { 00:11:36.019 "state": "completed", 00:11:36.019 "digest": "sha256", 00:11:36.019 "dhgroup": "ffdhe3072" 00:11:36.019 } 00:11:36.019 } 00:11:36.019 ]' 00:11:36.019 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.278 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:36.278 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.278 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:36.278 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.278 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.278 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.278 16:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.537 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:36.537 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:37.105 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.105 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:37.105 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.105 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.105 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.105 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.105 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.105 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.105 16:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.673 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.932 00:11:37.932 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.932 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.932 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.192 { 00:11:38.192 "cntlid": 25, 00:11:38.192 "qid": 0, 00:11:38.192 "state": "enabled", 00:11:38.192 "thread": "nvmf_tgt_poll_group_000", 00:11:38.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:38.192 "listen_address": { 00:11:38.192 "trtype": "TCP", 00:11:38.192 "adrfam": "IPv4", 00:11:38.192 "traddr": "10.0.0.3", 00:11:38.192 "trsvcid": "4420" 00:11:38.192 }, 00:11:38.192 "peer_address": { 00:11:38.192 "trtype": "TCP", 00:11:38.192 "adrfam": "IPv4", 00:11:38.192 "traddr": "10.0.0.1", 00:11:38.192 "trsvcid": "47592" 00:11:38.192 }, 00:11:38.192 "auth": { 00:11:38.192 "state": "completed", 00:11:38.192 "digest": "sha256", 00:11:38.192 "dhgroup": "ffdhe4096" 00:11:38.192 } 00:11:38.192 } 00:11:38.192 ]' 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.192 16:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.759 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:38.759 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:39.327 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.328 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:39.328 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.328 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.328 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.328 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.328 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:39.328 16:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:39.586 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:39.586 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.586 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.586 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:39.586 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:39.586 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.586 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.586 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.587 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.587 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.587 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.587 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.587 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.154 00:11:40.154 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.154 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.154 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.413 { 00:11:40.413 "cntlid": 27, 00:11:40.413 "qid": 0, 00:11:40.413 "state": "enabled", 00:11:40.413 "thread": "nvmf_tgt_poll_group_000", 00:11:40.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:40.413 "listen_address": { 00:11:40.413 "trtype": "TCP", 00:11:40.413 "adrfam": "IPv4", 00:11:40.413 "traddr": "10.0.0.3", 00:11:40.413 "trsvcid": "4420" 00:11:40.413 }, 00:11:40.413 "peer_address": { 00:11:40.413 "trtype": "TCP", 00:11:40.413 "adrfam": "IPv4", 00:11:40.413 "traddr": "10.0.0.1", 00:11:40.413 "trsvcid": "52608" 00:11:40.413 }, 00:11:40.413 "auth": { 00:11:40.413 "state": "completed", 
00:11:40.413 "digest": "sha256", 00:11:40.413 "dhgroup": "ffdhe4096" 00:11:40.413 } 00:11:40.413 } 00:11:40.413 ]' 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.413 16:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.672 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:40.672 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:41.608 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.608 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:41.608 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.608 16:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.608 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.608 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.608 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:41.608 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.866 16:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.866 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.867 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.125 00:11:42.125 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.125 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.125 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.385 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.385 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.385 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.385 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.385 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.385 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.385 { 00:11:42.385 "cntlid": 29, 00:11:42.385 "qid": 0, 00:11:42.385 "state": "enabled", 00:11:42.385 "thread": "nvmf_tgt_poll_group_000", 00:11:42.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:42.385 "listen_address": { 00:11:42.385 "trtype": "TCP", 00:11:42.385 "adrfam": "IPv4", 00:11:42.385 "traddr": "10.0.0.3", 00:11:42.385 "trsvcid": "4420" 00:11:42.385 }, 00:11:42.385 "peer_address": { 00:11:42.385 "trtype": "TCP", 00:11:42.385 "adrfam": 
"IPv4", 00:11:42.385 "traddr": "10.0.0.1", 00:11:42.385 "trsvcid": "52630" 00:11:42.385 }, 00:11:42.385 "auth": { 00:11:42.385 "state": "completed", 00:11:42.385 "digest": "sha256", 00:11:42.385 "dhgroup": "ffdhe4096" 00:11:42.385 } 00:11:42.385 } 00:11:42.385 ]' 00:11:42.385 16:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.385 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.385 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.644 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:42.644 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.644 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.644 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.644 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.903 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:42.903 16:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:43.502 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.502 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:43.502 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.502 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.502 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.502 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.502 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:43.502 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:43.769 16:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.769 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.027 00:11:44.027 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.027 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.286 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.545 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.545 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.545 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.545 16:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.545 { 00:11:44.545 "cntlid": 31, 00:11:44.545 "qid": 0, 00:11:44.545 "state": "enabled", 00:11:44.545 "thread": "nvmf_tgt_poll_group_000", 00:11:44.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:44.545 "listen_address": { 00:11:44.545 "trtype": "TCP", 00:11:44.545 "adrfam": "IPv4", 00:11:44.545 "traddr": "10.0.0.3", 00:11:44.545 "trsvcid": "4420" 00:11:44.545 }, 00:11:44.545 "peer_address": { 00:11:44.545 "trtype": "TCP", 
00:11:44.545 "adrfam": "IPv4", 00:11:44.545 "traddr": "10.0.0.1", 00:11:44.545 "trsvcid": "52666" 00:11:44.545 }, 00:11:44.545 "auth": { 00:11:44.545 "state": "completed", 00:11:44.545 "digest": "sha256", 00:11:44.545 "dhgroup": "ffdhe4096" 00:11:44.545 } 00:11:44.545 } 00:11:44.545 ]' 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.545 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.141 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:45.141 16:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:45.707 
16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.707 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.964 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.965 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.965 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.965 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.531 00:11:46.531 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.531 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.531 16:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.789 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.789 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.789 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.789 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.789 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.789 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.789 { 00:11:46.789 "cntlid": 33, 00:11:46.789 "qid": 0, 00:11:46.789 "state": "enabled", 00:11:46.790 "thread": "nvmf_tgt_poll_group_000", 00:11:46.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:46.790 "listen_address": { 00:11:46.790 "trtype": "TCP", 00:11:46.790 "adrfam": "IPv4", 00:11:46.790 "traddr": 
"10.0.0.3", 00:11:46.790 "trsvcid": "4420" 00:11:46.790 }, 00:11:46.790 "peer_address": { 00:11:46.790 "trtype": "TCP", 00:11:46.790 "adrfam": "IPv4", 00:11:46.790 "traddr": "10.0.0.1", 00:11:46.790 "trsvcid": "52692" 00:11:46.790 }, 00:11:46.790 "auth": { 00:11:46.790 "state": "completed", 00:11:46.790 "digest": "sha256", 00:11:46.790 "dhgroup": "ffdhe6144" 00:11:46.790 } 00:11:46.790 } 00:11:46.790 ]' 00:11:46.790 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.790 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.790 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.790 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:46.790 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.790 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.790 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.790 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.049 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:47.049 16:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.986 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.987 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.987 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.987 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.987 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.987 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.987 16:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.556 00:11:48.556 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.556 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.556 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.815 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.815 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.815 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.815 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.815 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.815 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.815 { 00:11:48.815 "cntlid": 35, 00:11:48.815 "qid": 0, 00:11:48.815 "state": "enabled", 00:11:48.815 "thread": "nvmf_tgt_poll_group_000", 
00:11:48.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:48.815 "listen_address": { 00:11:48.815 "trtype": "TCP", 00:11:48.815 "adrfam": "IPv4", 00:11:48.815 "traddr": "10.0.0.3", 00:11:48.815 "trsvcid": "4420" 00:11:48.815 }, 00:11:48.815 "peer_address": { 00:11:48.815 "trtype": "TCP", 00:11:48.815 "adrfam": "IPv4", 00:11:48.815 "traddr": "10.0.0.1", 00:11:48.815 "trsvcid": "52718" 00:11:48.815 }, 00:11:48.815 "auth": { 00:11:48.815 "state": "completed", 00:11:48.815 "digest": "sha256", 00:11:48.815 "dhgroup": "ffdhe6144" 00:11:48.815 } 00:11:48.815 } 00:11:48.815 ]' 00:11:48.815 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.815 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.815 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.073 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:49.073 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.073 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.073 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.073 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.332 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:49.332 16:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:49.900 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.900 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:49.900 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.900 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.900 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.900 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.900 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:49.900 16:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.160 16:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.729 00:11:50.729 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.729 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.729 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.988 { 
00:11:50.988 "cntlid": 37, 00:11:50.988 "qid": 0, 00:11:50.988 "state": "enabled", 00:11:50.988 "thread": "nvmf_tgt_poll_group_000", 00:11:50.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:50.988 "listen_address": { 00:11:50.988 "trtype": "TCP", 00:11:50.988 "adrfam": "IPv4", 00:11:50.988 "traddr": "10.0.0.3", 00:11:50.988 "trsvcid": "4420" 00:11:50.988 }, 00:11:50.988 "peer_address": { 00:11:50.988 "trtype": "TCP", 00:11:50.988 "adrfam": "IPv4", 00:11:50.988 "traddr": "10.0.0.1", 00:11:50.988 "trsvcid": "47484" 00:11:50.988 }, 00:11:50.988 "auth": { 00:11:50.988 "state": "completed", 00:11:50.988 "digest": "sha256", 00:11:50.988 "dhgroup": "ffdhe6144" 00:11:50.988 } 00:11:50.988 } 00:11:50.988 ]' 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.988 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.247 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:51.247 16:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:11:52.183 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.183 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:52.183 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.183 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.183 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.183 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.183 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:52.183 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.443 16:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:53.021 00:11:53.021 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.021 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.021 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.280 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.280 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.280 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.280 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.280 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.280 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:53.280 { 00:11:53.281 "cntlid": 39, 00:11:53.281 "qid": 0, 00:11:53.281 "state": "enabled", 00:11:53.281 "thread": "nvmf_tgt_poll_group_000", 00:11:53.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:53.281 "listen_address": { 00:11:53.281 "trtype": "TCP", 00:11:53.281 "adrfam": "IPv4", 00:11:53.281 "traddr": "10.0.0.3", 00:11:53.281 "trsvcid": "4420" 00:11:53.281 }, 00:11:53.281 "peer_address": { 00:11:53.281 "trtype": "TCP", 00:11:53.281 "adrfam": "IPv4", 00:11:53.281 "traddr": "10.0.0.1", 00:11:53.281 "trsvcid": "47504" 00:11:53.281 }, 00:11:53.281 "auth": { 00:11:53.281 "state": "completed", 00:11:53.281 "digest": "sha256", 00:11:53.281 "dhgroup": "ffdhe6144" 00:11:53.281 } 00:11:53.281 } 00:11:53.281 ]' 00:11:53.281 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.281 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:53.281 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.281 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:53.281 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.281 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.281 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.281 16:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.539 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:53.539 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:11:54.475 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.475 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:54.476 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.476 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.476 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.476 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.476 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.476 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:54.476 16:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.735 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:55.302 00:11:55.302 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.302 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.302 16:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.560 { 00:11:55.560 "cntlid": 41, 00:11:55.560 "qid": 0, 00:11:55.560 "state": "enabled", 00:11:55.560 "thread": "nvmf_tgt_poll_group_000", 00:11:55.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:55.560 "listen_address": { 00:11:55.560 "trtype": "TCP", 00:11:55.560 "adrfam": "IPv4", 00:11:55.560 "traddr": "10.0.0.3", 00:11:55.560 "trsvcid": "4420" 00:11:55.560 }, 00:11:55.560 "peer_address": { 00:11:55.560 "trtype": "TCP", 00:11:55.560 "adrfam": "IPv4", 00:11:55.560 "traddr": "10.0.0.1", 00:11:55.560 "trsvcid": "47526" 00:11:55.560 }, 00:11:55.560 "auth": { 00:11:55.560 "state": "completed", 00:11:55.560 "digest": "sha256", 00:11:55.560 "dhgroup": "ffdhe8192" 00:11:55.560 } 00:11:55.560 } 00:11:55.560 ]' 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.560 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.819 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.819 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.819 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.078 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:56.078 16:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:11:56.645 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.645 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:56.645 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.645 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.645 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
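For reference, each connect_authenticate pass traced in this log reduces to the RPC sequence sketched below, written out here as plain shell for the ffdhe8192/key1 combination that follows. This is only a sketch of what the trace shows: it assumes the DH-HMAC-CHAP keyring entries key1/ckey1 were registered earlier in auth.sh (not visible in this excerpt) and that rpc_cmd in the trace wraps scripts/rpc.py against the target's default RPC socket, while the host-side SPDK app answers on /var/tmp/host.sock as seen above.

    # Host-side bdev layer: allow exactly one digest/dhgroup combination for this pass
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side: admit the host NQN on the subsystem with the DH-HMAC-CHAP keys
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller over TCP, authenticating in-band
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Target side: confirm the qpair negotiated the expected digest, dhgroup and state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

    # Host side: tear down before the next key/dhgroup combination
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0

Restricting bdev_nvme_set_options to a single digest and dhgroup per pass is what lets the jq checks on .auth.digest and .auth.dhgroup confirm that exactly that combination was negotiated.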
00:11:56.645 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.645 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:56.645 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.903 16:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.471 00:11:57.471 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.471 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.471 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.730 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.730 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.730 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.730 16:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.730 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.730 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.730 { 00:11:57.730 "cntlid": 43, 00:11:57.730 "qid": 0, 00:11:57.730 "state": "enabled", 00:11:57.730 "thread": "nvmf_tgt_poll_group_000", 00:11:57.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:11:57.730 "listen_address": { 00:11:57.730 "trtype": "TCP", 00:11:57.730 "adrfam": "IPv4", 00:11:57.730 "traddr": "10.0.0.3", 00:11:57.730 "trsvcid": "4420" 00:11:57.730 }, 00:11:57.730 "peer_address": { 00:11:57.730 "trtype": "TCP", 00:11:57.730 "adrfam": "IPv4", 00:11:57.730 "traddr": "10.0.0.1", 00:11:57.730 "trsvcid": "47566" 00:11:57.730 }, 00:11:57.730 "auth": { 00:11:57.730 "state": "completed", 00:11:57.730 "digest": "sha256", 00:11:57.730 "dhgroup": "ffdhe8192" 00:11:57.730 } 00:11:57.730 } 00:11:57.730 ]' 00:11:57.730 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.988 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.988 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.988 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:57.988 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.988 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.988 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.988 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.246 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:58.246 16:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:11:58.813 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.814 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:11:58.814 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.814 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
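The host-side check at the end of each pass, traced immediately above, exercises the same DH-HMAC-CHAP material through plain nvme-cli with the secrets passed in-band. Roughly, with the long DHHC-1 strings shortened to placeholders (the full values appear verbatim in the trace):

    # Kernel initiator: connect over TCP, authenticating with DH-HMAC-CHAP
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
        --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 \
        --dhchap-secret 'DHHC-1:01:<host key, placeholder>' \
        --dhchap-ctrl-secret 'DHHC-1:02:<controller key, placeholder>'

    # Drop the connection again once the controller has come up
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Target side: revoke the host entry so the next pass starts from a clean subsystem
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a

As above, the remove_host call is shown against the target's default RPC socket on the assumption that rpc_cmd in the trace resolves to scripts/rpc.py without an explicit -s argument.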
00:11:58.814 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.814 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.814 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:58.814 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.113 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.114 16:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.707 00:11:59.707 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.707 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.707 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.966 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.225 16:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.225 { 00:12:00.225 "cntlid": 45, 00:12:00.225 "qid": 0, 00:12:00.225 "state": "enabled", 00:12:00.225 "thread": "nvmf_tgt_poll_group_000", 00:12:00.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:00.225 "listen_address": { 00:12:00.225 "trtype": "TCP", 00:12:00.225 "adrfam": "IPv4", 00:12:00.225 "traddr": "10.0.0.3", 00:12:00.225 "trsvcid": "4420" 00:12:00.225 }, 00:12:00.225 "peer_address": { 00:12:00.225 "trtype": "TCP", 00:12:00.225 "adrfam": "IPv4", 00:12:00.225 "traddr": "10.0.0.1", 00:12:00.225 "trsvcid": "47596" 00:12:00.225 }, 00:12:00.225 "auth": { 00:12:00.225 "state": "completed", 00:12:00.225 "digest": "sha256", 00:12:00.225 "dhgroup": "ffdhe8192" 00:12:00.225 } 00:12:00.225 } 00:12:00.225 ]' 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.225 16:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.484 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:00.484 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:01.421 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.421 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:01.421 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
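Success here is judged on more than the attach returning: after each connection the script checks that bdev_nvme_get_controllers reports nvme0 and that the qpair returned by nvmf_subsystem_get_qpairs carries the expected digest and dhgroup with the auth state "completed". A sketch of that verification, reusing the jq filters shown in the log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# The attached controller must exist on the host side...
name=$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]

# ...and the subsystem qpair must have completed DH-HMAC-CHAP with the expected parameters.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256"    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]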
00:12:01.421 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.421 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.421 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.421 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:01.421 16:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:01.421 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.357 00:12:02.357 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.357 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.357 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.357 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.357 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.357 
16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.357 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.358 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.358 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.358 { 00:12:02.358 "cntlid": 47, 00:12:02.358 "qid": 0, 00:12:02.358 "state": "enabled", 00:12:02.358 "thread": "nvmf_tgt_poll_group_000", 00:12:02.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:02.358 "listen_address": { 00:12:02.358 "trtype": "TCP", 00:12:02.358 "adrfam": "IPv4", 00:12:02.358 "traddr": "10.0.0.3", 00:12:02.358 "trsvcid": "4420" 00:12:02.358 }, 00:12:02.358 "peer_address": { 00:12:02.358 "trtype": "TCP", 00:12:02.358 "adrfam": "IPv4", 00:12:02.358 "traddr": "10.0.0.1", 00:12:02.358 "trsvcid": "50612" 00:12:02.358 }, 00:12:02.358 "auth": { 00:12:02.358 "state": "completed", 00:12:02.358 "digest": "sha256", 00:12:02.358 "dhgroup": "ffdhe8192" 00:12:02.358 } 00:12:02.358 } 00:12:02.358 ]' 00:12:02.358 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.358 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:02.358 16:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.617 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.617 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.617 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.617 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.617 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.876 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:02.876 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:03.444 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.444 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:03.444 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.444 16:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
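The in-kernel initiator path is exercised as well: after the bdev-level check and detach, nvme-cli connects to the same listener with the key material in its DHHC-1 wire format, and the association is torn down again before the host is removed from the subsystem. Roughly what the nvme connect/disconnect calls in the log amount to, with the secrets abbreviated to placeholders:

# Host key goes in --dhchap-secret, controller key (if any) in --dhchap-ctrl-secret.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
    --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a \
    --dhchap-secret "DHHC-1:01:<host key>" \
    --dhchap-ctrl-secret "DHHC-1:02:<controller key>"

# Drop the association so the next key/dhgroup pass starts clean.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0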
00:12:03.444 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.444 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:03.444 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.444 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.444 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:03.444 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.705 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.271 00:12:04.271 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.271 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.271 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.271 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.530 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.530 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.530 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.530 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.530 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.530 { 00:12:04.530 "cntlid": 49, 00:12:04.530 "qid": 0, 00:12:04.530 "state": "enabled", 00:12:04.530 "thread": "nvmf_tgt_poll_group_000", 00:12:04.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:04.530 "listen_address": { 00:12:04.530 "trtype": "TCP", 00:12:04.530 "adrfam": "IPv4", 00:12:04.530 "traddr": "10.0.0.3", 00:12:04.530 "trsvcid": "4420" 00:12:04.530 }, 00:12:04.530 "peer_address": { 00:12:04.530 "trtype": "TCP", 00:12:04.530 "adrfam": "IPv4", 00:12:04.530 "traddr": "10.0.0.1", 00:12:04.530 "trsvcid": "50646" 00:12:04.530 }, 00:12:04.530 "auth": { 00:12:04.530 "state": "completed", 00:12:04.530 "digest": "sha384", 00:12:04.530 "dhgroup": "null" 00:12:04.530 } 00:12:04.530 } 00:12:04.530 ]' 00:12:04.530 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.530 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.530 16:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.530 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:04.530 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.530 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.530 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.530 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.789 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:04.789 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:05.357 16:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.616 16:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:05.616 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.616 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.616 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.616 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.616 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:05.616 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:05.875 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.876 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.135 00:12:06.135 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.135 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.135 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.395 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.395 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.395 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.395 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.395 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.395 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.395 { 00:12:06.395 "cntlid": 51, 00:12:06.395 "qid": 0, 00:12:06.395 "state": "enabled", 00:12:06.395 "thread": "nvmf_tgt_poll_group_000", 00:12:06.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:06.395 "listen_address": { 00:12:06.395 "trtype": "TCP", 00:12:06.395 "adrfam": "IPv4", 00:12:06.395 "traddr": "10.0.0.3", 00:12:06.395 "trsvcid": "4420" 00:12:06.395 }, 00:12:06.395 "peer_address": { 00:12:06.395 "trtype": "TCP", 00:12:06.395 "adrfam": "IPv4", 00:12:06.395 "traddr": "10.0.0.1", 00:12:06.395 "trsvcid": "50682" 00:12:06.395 }, 00:12:06.395 "auth": { 00:12:06.395 "state": "completed", 00:12:06.395 "digest": "sha384", 00:12:06.395 "dhgroup": "null" 00:12:06.395 } 00:12:06.395 } 00:12:06.395 ]' 00:12:06.395 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.395 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.395 16:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.395 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:06.655 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.655 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.655 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.655 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.914 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:06.914 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:07.481 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.481 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.481 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:07.481 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.481 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.481 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.481 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.481 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:07.481 16:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.741 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.001 00:12:08.001 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.001 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:12:08.001 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.261 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.261 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.261 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.261 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.261 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.261 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.261 { 00:12:08.261 "cntlid": 53, 00:12:08.261 "qid": 0, 00:12:08.261 "state": "enabled", 00:12:08.261 "thread": "nvmf_tgt_poll_group_000", 00:12:08.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:08.261 "listen_address": { 00:12:08.261 "trtype": "TCP", 00:12:08.261 "adrfam": "IPv4", 00:12:08.261 "traddr": "10.0.0.3", 00:12:08.261 "trsvcid": "4420" 00:12:08.261 }, 00:12:08.261 "peer_address": { 00:12:08.261 "trtype": "TCP", 00:12:08.261 "adrfam": "IPv4", 00:12:08.261 "traddr": "10.0.0.1", 00:12:08.261 "trsvcid": "50700" 00:12:08.261 }, 00:12:08.261 "auth": { 00:12:08.261 "state": "completed", 00:12:08.261 "digest": "sha384", 00:12:08.261 "dhgroup": "null" 00:12:08.261 } 00:12:08.261 } 00:12:08.261 ]' 00:12:08.261 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.261 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.261 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.521 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:08.521 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.521 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.521 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.521 16:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.779 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:08.780 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:09.348 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.348 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:09.348 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.348 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.348 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.348 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.348 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:09.348 16:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.607 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:09.866 00:12:09.866 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.866 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
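Not every iteration authenticates in both directions. The expansion recorded above, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), only emits the controller-key flags when a controller key exists for that slot; in this run key3 has none, which is why its nvmf_subsystem_add_host and attach calls carry --dhchap-key key3 alone and the matching nvme connect passes only --dhchap-secret. A small illustration of how that ":+" expansion drops the flags when the slot is empty (array contents hypothetical):

# ":+" expands to the extra flags only when the indexed slot is non-empty.
ckeys=("ckey0" "ckey1" "ckey2" "")   # hypothetical: no controller key for slot 3
for keyid in "${!ckeys[@]}"; do
    extra=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> ${#extra[@]} controller-key args"
done
# key0..key2 print 2 args each; key3 prints 0, matching the key3 calls above.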
00:12:09.866 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.125 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.125 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.125 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.125 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.384 { 00:12:10.384 "cntlid": 55, 00:12:10.384 "qid": 0, 00:12:10.384 "state": "enabled", 00:12:10.384 "thread": "nvmf_tgt_poll_group_000", 00:12:10.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:10.384 "listen_address": { 00:12:10.384 "trtype": "TCP", 00:12:10.384 "adrfam": "IPv4", 00:12:10.384 "traddr": "10.0.0.3", 00:12:10.384 "trsvcid": "4420" 00:12:10.384 }, 00:12:10.384 "peer_address": { 00:12:10.384 "trtype": "TCP", 00:12:10.384 "adrfam": "IPv4", 00:12:10.384 "traddr": "10.0.0.1", 00:12:10.384 "trsvcid": "50728" 00:12:10.384 }, 00:12:10.384 "auth": { 00:12:10.384 "state": "completed", 00:12:10.384 "digest": "sha384", 00:12:10.384 "dhgroup": "null" 00:12:10.384 } 00:12:10.384 } 00:12:10.384 ]' 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.384 16:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.642 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:10.642 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:11.578 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
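The script markers auth.sh@118 through auth.sh@121 that recur through this output are the sweep itself: for every digest, every DH group and every key index, the host options are reprogrammed and the whole add-host/attach/verify/connect/teardown cycle runs again (sha256 with ffdhe8192 earlier, sha384 with null here, with ffdhe2048 following). A skeleton of that driver loop, assuming it matches the line markers and leaning on the script's own helpers (hostrpc, connect_authenticate) and its keys array:

# Sweep skeleton implied by the auth.sh@118-121 markers; the lists are assumptions,
# this part of the log only shows sha256/sha384 and ffdhe8192/null/ffdhe2048.
digests=(sha256 sha384)
dhgroups=(ffdhe8192 null ffdhe2048)

for digest in "${digests[@]}"; do                                             # auth.sh@118
    for dhgroup in "${dhgroups[@]}"; do                                       # auth.sh@119
        for keyid in "${!keys[@]}"; do                                        # auth.sh@120
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"       # auth.sh@121
            connect_authenticate "$digest" "$dhgroup" "$keyid"                # auth.sh@123
        done
    done
done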
00:12:11.578 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:11.578 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.578 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.578 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.578 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.578 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.578 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.578 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.578 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.146 00:12:12.146 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.146 
16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.146 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.405 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.405 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.405 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.405 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.405 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.405 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.405 { 00:12:12.405 "cntlid": 57, 00:12:12.405 "qid": 0, 00:12:12.405 "state": "enabled", 00:12:12.405 "thread": "nvmf_tgt_poll_group_000", 00:12:12.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:12.405 "listen_address": { 00:12:12.405 "trtype": "TCP", 00:12:12.405 "adrfam": "IPv4", 00:12:12.405 "traddr": "10.0.0.3", 00:12:12.405 "trsvcid": "4420" 00:12:12.405 }, 00:12:12.405 "peer_address": { 00:12:12.405 "trtype": "TCP", 00:12:12.405 "adrfam": "IPv4", 00:12:12.405 "traddr": "10.0.0.1", 00:12:12.405 "trsvcid": "39192" 00:12:12.405 }, 00:12:12.405 "auth": { 00:12:12.405 "state": "completed", 00:12:12.405 "digest": "sha384", 00:12:12.405 "dhgroup": "ffdhe2048" 00:12:12.405 } 00:12:12.405 } 00:12:12.405 ]' 00:12:12.405 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.405 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.405 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.406 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:12.406 16:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.406 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.406 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.406 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.665 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:12.665 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: 
--dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:13.649 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.649 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:13.649 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.649 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.649 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.649 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.649 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:13.649 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.649 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.908 00:12:14.166 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.166 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.167 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.167 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.167 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.167 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.167 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.167 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.167 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.167 { 00:12:14.167 "cntlid": 59, 00:12:14.167 "qid": 0, 00:12:14.167 "state": "enabled", 00:12:14.167 "thread": "nvmf_tgt_poll_group_000", 00:12:14.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:14.167 "listen_address": { 00:12:14.167 "trtype": "TCP", 00:12:14.167 "adrfam": "IPv4", 00:12:14.167 "traddr": "10.0.0.3", 00:12:14.167 "trsvcid": "4420" 00:12:14.167 }, 00:12:14.167 "peer_address": { 00:12:14.167 "trtype": "TCP", 00:12:14.167 "adrfam": "IPv4", 00:12:14.167 "traddr": "10.0.0.1", 00:12:14.167 "trsvcid": "39220" 00:12:14.167 }, 00:12:14.167 "auth": { 00:12:14.167 "state": "completed", 00:12:14.167 "digest": "sha384", 00:12:14.167 "dhgroup": "ffdhe2048" 00:12:14.167 } 00:12:14.167 } 00:12:14.167 ]' 00:12:14.425 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.425 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.425 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.425 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:14.425 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.425 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.425 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.425 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.683 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:14.683 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:15.249 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.249 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:15.249 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.249 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.249 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.249 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.249 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:15.249 16:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.817 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.075 00:12:16.075 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.076 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.076 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.334 { 00:12:16.334 "cntlid": 61, 00:12:16.334 "qid": 0, 00:12:16.334 "state": "enabled", 00:12:16.334 "thread": "nvmf_tgt_poll_group_000", 00:12:16.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:16.334 "listen_address": { 00:12:16.334 "trtype": "TCP", 00:12:16.334 "adrfam": "IPv4", 00:12:16.334 "traddr": "10.0.0.3", 00:12:16.334 "trsvcid": "4420" 00:12:16.334 }, 00:12:16.334 "peer_address": { 00:12:16.334 "trtype": "TCP", 00:12:16.334 "adrfam": "IPv4", 00:12:16.334 "traddr": "10.0.0.1", 00:12:16.334 "trsvcid": "39248" 00:12:16.334 }, 00:12:16.334 "auth": { 00:12:16.334 "state": "completed", 00:12:16.334 "digest": "sha384", 00:12:16.334 "dhgroup": "ffdhe2048" 00:12:16.334 } 00:12:16.334 } 00:12:16.334 ]' 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:16.334 16:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.593 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.593 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.593 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.852 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:16.852 16:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:17.420 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.420 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:17.420 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.420 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.420 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.420 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.420 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:17.420 16:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:17.678 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.245 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.245 { 00:12:18.245 "cntlid": 63, 00:12:18.245 "qid": 0, 00:12:18.245 "state": "enabled", 00:12:18.245 "thread": "nvmf_tgt_poll_group_000", 00:12:18.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:18.245 "listen_address": { 00:12:18.245 "trtype": "TCP", 00:12:18.245 "adrfam": "IPv4", 00:12:18.245 "traddr": "10.0.0.3", 00:12:18.245 "trsvcid": "4420" 00:12:18.245 }, 00:12:18.245 "peer_address": { 00:12:18.245 "trtype": "TCP", 00:12:18.245 "adrfam": "IPv4", 00:12:18.245 "traddr": "10.0.0.1", 00:12:18.245 "trsvcid": "39290" 00:12:18.245 }, 00:12:18.245 "auth": { 00:12:18.245 "state": "completed", 00:12:18.245 "digest": "sha384", 00:12:18.245 "dhgroup": "ffdhe2048" 00:12:18.245 } 00:12:18.245 } 00:12:18.245 ]' 00:12:18.245 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.503 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.503 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.503 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:18.503 16:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.503 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.503 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.503 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.762 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:18.762 16:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:19.697 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.697 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:19.697 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.697 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.697 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.697 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:19.697 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.697 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.697 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.955 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:19.955 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:19.956 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.214 00:12:20.214 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.215 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.215 16:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.473 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.473 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.473 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.473 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.473 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.473 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.473 { 00:12:20.473 "cntlid": 65, 00:12:20.473 "qid": 0, 00:12:20.473 "state": "enabled", 00:12:20.473 "thread": "nvmf_tgt_poll_group_000", 00:12:20.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:20.473 "listen_address": { 00:12:20.473 "trtype": "TCP", 00:12:20.473 "adrfam": "IPv4", 00:12:20.473 "traddr": "10.0.0.3", 00:12:20.473 "trsvcid": "4420" 00:12:20.473 }, 00:12:20.473 "peer_address": { 00:12:20.473 "trtype": "TCP", 00:12:20.473 "adrfam": "IPv4", 00:12:20.473 "traddr": "10.0.0.1", 00:12:20.473 "trsvcid": "56028" 00:12:20.473 }, 00:12:20.473 "auth": { 00:12:20.473 "state": "completed", 00:12:20.473 "digest": "sha384", 00:12:20.473 "dhgroup": "ffdhe3072" 00:12:20.473 } 00:12:20.473 } 00:12:20.473 ]' 00:12:20.473 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.732 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.732 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.732 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:20.732 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.732 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.732 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.732 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.991 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:20.991 16:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:21.558 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.558 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:21.558 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.558 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.558 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.559 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.559 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:21.559 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:21.817 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:21.817 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.817 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:21.818 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:21.818 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:21.818 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.818 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.818 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.818 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.818 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.818 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.818 16:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.818 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.385 00:12:22.385 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.385 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.385 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.385 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.386 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.386 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.386 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.386 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.386 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.386 { 00:12:22.386 "cntlid": 67, 00:12:22.386 "qid": 0, 00:12:22.386 "state": "enabled", 00:12:22.386 "thread": "nvmf_tgt_poll_group_000", 00:12:22.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:22.386 "listen_address": { 00:12:22.386 "trtype": "TCP", 00:12:22.386 "adrfam": "IPv4", 00:12:22.386 "traddr": "10.0.0.3", 00:12:22.386 "trsvcid": "4420" 00:12:22.386 }, 00:12:22.386 "peer_address": { 00:12:22.386 "trtype": "TCP", 00:12:22.386 "adrfam": "IPv4", 00:12:22.386 "traddr": "10.0.0.1", 00:12:22.386 "trsvcid": "56064" 00:12:22.386 }, 00:12:22.386 "auth": { 00:12:22.386 "state": "completed", 00:12:22.386 "digest": "sha384", 00:12:22.386 "dhgroup": "ffdhe3072" 00:12:22.386 } 00:12:22.386 } 00:12:22.386 ]' 00:12:22.386 16:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.644 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.644 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.644 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:22.644 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.644 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.644 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.644 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.902 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:22.902 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:23.469 16:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.469 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:23.469 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.469 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.469 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.469 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.469 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:23.469 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.726 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.984 00:12:24.243 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.243 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.243 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.243 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.243 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.243 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.243 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.502 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.502 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.502 { 00:12:24.502 "cntlid": 69, 00:12:24.502 "qid": 0, 00:12:24.502 "state": "enabled", 00:12:24.502 "thread": "nvmf_tgt_poll_group_000", 00:12:24.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:24.502 "listen_address": { 00:12:24.502 "trtype": "TCP", 00:12:24.502 "adrfam": "IPv4", 00:12:24.502 "traddr": "10.0.0.3", 00:12:24.502 "trsvcid": "4420" 00:12:24.502 }, 00:12:24.502 "peer_address": { 00:12:24.502 "trtype": "TCP", 00:12:24.502 "adrfam": "IPv4", 00:12:24.502 "traddr": "10.0.0.1", 00:12:24.502 "trsvcid": "56094" 00:12:24.502 }, 00:12:24.502 "auth": { 00:12:24.502 "state": "completed", 00:12:24.502 "digest": "sha384", 00:12:24.502 "dhgroup": "ffdhe3072" 00:12:24.502 } 00:12:24.502 } 00:12:24.502 ]' 00:12:24.502 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.502 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.502 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.502 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:24.502 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.502 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.502 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:24.502 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.760 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:24.760 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:25.327 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.327 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:25.327 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.327 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.327 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.327 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.327 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:25.327 16:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:25.586 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:25.586 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.586 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:25.586 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:25.586 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:25.586 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.586 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:12:25.586 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.586 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.844 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.844 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:25.844 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.844 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:26.103 00:12:26.103 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.103 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.103 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.361 { 00:12:26.361 "cntlid": 71, 00:12:26.361 "qid": 0, 00:12:26.361 "state": "enabled", 00:12:26.361 "thread": "nvmf_tgt_poll_group_000", 00:12:26.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:26.361 "listen_address": { 00:12:26.361 "trtype": "TCP", 00:12:26.361 "adrfam": "IPv4", 00:12:26.361 "traddr": "10.0.0.3", 00:12:26.361 "trsvcid": "4420" 00:12:26.361 }, 00:12:26.361 "peer_address": { 00:12:26.361 "trtype": "TCP", 00:12:26.361 "adrfam": "IPv4", 00:12:26.361 "traddr": "10.0.0.1", 00:12:26.361 "trsvcid": "56126" 00:12:26.361 }, 00:12:26.361 "auth": { 00:12:26.361 "state": "completed", 00:12:26.361 "digest": "sha384", 00:12:26.361 "dhgroup": "ffdhe3072" 00:12:26.361 } 00:12:26.361 } 00:12:26.361 ]' 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.361 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.619 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:26.619 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:27.559 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.559 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:27.559 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.559 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.559 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.559 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:27.559 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.559 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:27.559 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.836 16:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.836 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.109 00:12:28.109 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.109 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.109 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.368 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.368 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.368 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.368 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.368 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.368 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.368 { 00:12:28.368 "cntlid": 73, 00:12:28.368 "qid": 0, 00:12:28.368 "state": "enabled", 00:12:28.368 "thread": "nvmf_tgt_poll_group_000", 00:12:28.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:28.368 "listen_address": { 00:12:28.368 "trtype": "TCP", 00:12:28.368 "adrfam": "IPv4", 00:12:28.368 "traddr": "10.0.0.3", 00:12:28.368 "trsvcid": "4420" 00:12:28.368 }, 00:12:28.368 "peer_address": { 00:12:28.368 "trtype": "TCP", 00:12:28.368 "adrfam": "IPv4", 00:12:28.368 "traddr": "10.0.0.1", 00:12:28.368 "trsvcid": "56148" 00:12:28.368 }, 00:12:28.368 "auth": { 00:12:28.368 "state": "completed", 00:12:28.368 "digest": "sha384", 00:12:28.368 "dhgroup": "ffdhe4096" 00:12:28.368 } 00:12:28.368 } 00:12:28.368 ]' 00:12:28.368 16:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.368 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.368 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.627 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:28.627 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.627 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.627 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.627 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.886 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:28.886 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:29.454 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.454 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:29.454 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.454 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.454 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.454 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.454 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:29.454 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.713 16:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.713 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.280 00:12:30.280 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.280 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.280 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.538 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.538 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.538 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.538 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.538 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.539 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.539 { 00:12:30.539 "cntlid": 75, 00:12:30.539 "qid": 0, 00:12:30.539 "state": "enabled", 00:12:30.539 "thread": "nvmf_tgt_poll_group_000", 00:12:30.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:30.539 "listen_address": { 00:12:30.539 "trtype": "TCP", 00:12:30.539 "adrfam": "IPv4", 00:12:30.539 "traddr": "10.0.0.3", 00:12:30.539 "trsvcid": "4420" 00:12:30.539 }, 00:12:30.539 "peer_address": { 00:12:30.539 "trtype": "TCP", 00:12:30.539 "adrfam": "IPv4", 00:12:30.539 "traddr": "10.0.0.1", 00:12:30.539 "trsvcid": "56844" 00:12:30.539 }, 00:12:30.539 "auth": { 00:12:30.539 "state": "completed", 00:12:30.539 "digest": "sha384", 00:12:30.539 "dhgroup": "ffdhe4096" 00:12:30.539 } 00:12:30.539 } 00:12:30.539 ]' 00:12:30.539 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.539 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.539 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.539 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:30.539 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.796 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.796 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.796 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.055 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:31.055 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:31.621 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.621 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:31.621 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.621 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.621 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.621 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.621 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:31.621 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.879 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.446 00:12:32.446 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.446 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.446 16:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.704 { 00:12:32.704 "cntlid": 77, 00:12:32.704 "qid": 0, 00:12:32.704 "state": "enabled", 00:12:32.704 "thread": "nvmf_tgt_poll_group_000", 00:12:32.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:32.704 "listen_address": { 00:12:32.704 "trtype": "TCP", 00:12:32.704 "adrfam": "IPv4", 00:12:32.704 "traddr": "10.0.0.3", 00:12:32.704 "trsvcid": "4420" 00:12:32.704 }, 00:12:32.704 "peer_address": { 00:12:32.704 "trtype": "TCP", 00:12:32.704 "adrfam": "IPv4", 00:12:32.704 "traddr": "10.0.0.1", 00:12:32.704 "trsvcid": "56864" 00:12:32.704 }, 00:12:32.704 "auth": { 00:12:32.704 "state": "completed", 00:12:32.704 "digest": "sha384", 00:12:32.704 "dhgroup": "ffdhe4096" 00:12:32.704 } 00:12:32.704 } 00:12:32.704 ]' 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.704 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.963 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:32.963 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.899 16:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.899 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.467 00:12:34.467 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.467 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.467 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.467 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.467 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.467 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.467 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.467 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.467 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.467 { 00:12:34.467 "cntlid": 79, 00:12:34.467 "qid": 0, 00:12:34.467 "state": "enabled", 00:12:34.467 "thread": "nvmf_tgt_poll_group_000", 00:12:34.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:34.467 "listen_address": { 00:12:34.467 "trtype": "TCP", 00:12:34.467 "adrfam": "IPv4", 00:12:34.467 "traddr": "10.0.0.3", 00:12:34.467 "trsvcid": "4420" 00:12:34.467 }, 00:12:34.467 "peer_address": { 00:12:34.467 "trtype": "TCP", 00:12:34.467 "adrfam": "IPv4", 00:12:34.467 "traddr": "10.0.0.1", 00:12:34.467 "trsvcid": "56880" 00:12:34.467 }, 00:12:34.467 "auth": { 00:12:34.467 "state": "completed", 00:12:34.467 "digest": "sha384", 00:12:34.467 "dhgroup": "ffdhe4096" 00:12:34.467 } 00:12:34.467 } 00:12:34.467 ]' 00:12:34.467 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.726 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.726 16:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.726 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:34.726 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.726 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.726 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.726 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.985 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:34.985 16:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.921 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.488 00:12:36.488 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.488 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.488 16:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.747 { 00:12:36.747 "cntlid": 81, 00:12:36.747 "qid": 0, 00:12:36.747 "state": "enabled", 00:12:36.747 "thread": "nvmf_tgt_poll_group_000", 00:12:36.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:36.747 "listen_address": { 00:12:36.747 "trtype": "TCP", 00:12:36.747 "adrfam": "IPv4", 00:12:36.747 "traddr": "10.0.0.3", 00:12:36.747 "trsvcid": "4420" 00:12:36.747 }, 00:12:36.747 "peer_address": { 00:12:36.747 "trtype": "TCP", 00:12:36.747 "adrfam": "IPv4", 00:12:36.747 "traddr": "10.0.0.1", 00:12:36.747 "trsvcid": "56902" 00:12:36.747 }, 00:12:36.747 "auth": { 00:12:36.747 "state": "completed", 00:12:36.747 "digest": "sha384", 00:12:36.747 "dhgroup": "ffdhe6144" 00:12:36.747 } 00:12:36.747 } 00:12:36.747 ]' 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:36.747 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.006 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.006 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.006 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.264 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:37.264 16:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:37.832 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.832 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:37.832 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.832 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.832 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.832 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.832 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:37.832 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.091 16:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.660 00:12:38.660 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.660 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.660 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.919 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.919 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.919 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.920 { 00:12:38.920 "cntlid": 83, 00:12:38.920 "qid": 0, 00:12:38.920 "state": "enabled", 00:12:38.920 "thread": "nvmf_tgt_poll_group_000", 00:12:38.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:38.920 "listen_address": { 00:12:38.920 "trtype": "TCP", 00:12:38.920 "adrfam": "IPv4", 00:12:38.920 "traddr": "10.0.0.3", 00:12:38.920 "trsvcid": "4420" 00:12:38.920 }, 00:12:38.920 "peer_address": { 00:12:38.920 "trtype": "TCP", 00:12:38.920 "adrfam": "IPv4", 00:12:38.920 "traddr": "10.0.0.1", 00:12:38.920 "trsvcid": "56936" 00:12:38.920 }, 00:12:38.920 "auth": { 00:12:38.920 "state": "completed", 00:12:38.920 "digest": "sha384", 
00:12:38.920 "dhgroup": "ffdhe6144" 00:12:38.920 } 00:12:38.920 } 00:12:38.920 ]' 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.920 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.178 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:39.179 16:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:39.745 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.745 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:39.745 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.745 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.745 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.745 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.745 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:39.745 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.004 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.572 00:12:40.572 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.572 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.572 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.830 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.830 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.830 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.830 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.830 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.830 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.830 { 00:12:40.831 "cntlid": 85, 00:12:40.831 "qid": 0, 00:12:40.831 "state": "enabled", 00:12:40.831 "thread": "nvmf_tgt_poll_group_000", 00:12:40.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:40.831 "listen_address": { 00:12:40.831 "trtype": "TCP", 00:12:40.831 "adrfam": "IPv4", 00:12:40.831 "traddr": "10.0.0.3", 00:12:40.831 "trsvcid": "4420" 00:12:40.831 }, 00:12:40.831 "peer_address": { 00:12:40.831 "trtype": "TCP", 00:12:40.831 "adrfam": "IPv4", 00:12:40.831 "traddr": "10.0.0.1", 00:12:40.831 "trsvcid": "43366" 
00:12:40.831 }, 00:12:40.831 "auth": { 00:12:40.831 "state": "completed", 00:12:40.831 "digest": "sha384", 00:12:40.831 "dhgroup": "ffdhe6144" 00:12:40.831 } 00:12:40.831 } 00:12:40.831 ]' 00:12:40.831 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.831 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.831 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.831 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:40.831 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.831 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.831 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.831 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.090 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:41.090 16:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:42.027 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.027 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:42.027 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.027 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.027 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.027 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.027 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:42.027 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:42.286 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:42.286 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:42.286 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:42.286 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:42.286 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:42.287 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.287 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:12:42.287 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.287 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.287 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.287 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:42.287 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.287 16:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.594 00:12:42.595 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.595 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.595 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.867 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.867 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.867 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.867 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.867 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.867 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.867 { 00:12:42.867 "cntlid": 87, 00:12:42.867 "qid": 0, 00:12:42.867 "state": "enabled", 00:12:42.867 "thread": "nvmf_tgt_poll_group_000", 00:12:42.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:42.867 "listen_address": { 00:12:42.867 "trtype": "TCP", 00:12:42.867 "adrfam": "IPv4", 00:12:42.867 "traddr": "10.0.0.3", 00:12:42.867 "trsvcid": "4420" 00:12:42.867 }, 00:12:42.867 "peer_address": { 00:12:42.867 "trtype": "TCP", 00:12:42.867 "adrfam": "IPv4", 00:12:42.867 "traddr": "10.0.0.1", 00:12:42.867 "trsvcid": 
"43388" 00:12:42.867 }, 00:12:42.867 "auth": { 00:12:42.867 "state": "completed", 00:12:42.867 "digest": "sha384", 00:12:42.867 "dhgroup": "ffdhe6144" 00:12:42.867 } 00:12:42.867 } 00:12:42.867 ]' 00:12:42.867 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.867 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.867 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.126 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:43.126 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.126 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.126 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.126 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.384 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:43.384 16:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:43.950 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.950 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:43.950 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.950 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.950 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.950 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.950 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.950 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:43.950 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.209 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.210 16:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.777 00:12:44.777 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.777 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.777 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.345 { 00:12:45.345 "cntlid": 89, 00:12:45.345 "qid": 0, 00:12:45.345 "state": "enabled", 00:12:45.345 "thread": "nvmf_tgt_poll_group_000", 00:12:45.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:45.345 "listen_address": { 00:12:45.345 "trtype": "TCP", 00:12:45.345 "adrfam": "IPv4", 00:12:45.345 "traddr": "10.0.0.3", 00:12:45.345 "trsvcid": "4420" 00:12:45.345 }, 00:12:45.345 "peer_address": { 00:12:45.345 
"trtype": "TCP", 00:12:45.345 "adrfam": "IPv4", 00:12:45.345 "traddr": "10.0.0.1", 00:12:45.345 "trsvcid": "43416" 00:12:45.345 }, 00:12:45.345 "auth": { 00:12:45.345 "state": "completed", 00:12:45.345 "digest": "sha384", 00:12:45.345 "dhgroup": "ffdhe8192" 00:12:45.345 } 00:12:45.345 } 00:12:45.345 ]' 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.345 16:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.604 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:45.604 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:46.541 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.541 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:46.541 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.541 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.541 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.541 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.541 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:46.541 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:46.800 16:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.800 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.367 00:12:47.368 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.368 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.368 16:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.626 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.626 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.627 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.627 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.627 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.627 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.627 { 00:12:47.627 "cntlid": 91, 00:12:47.627 "qid": 0, 00:12:47.627 "state": "enabled", 00:12:47.627 "thread": "nvmf_tgt_poll_group_000", 00:12:47.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 
00:12:47.627 "listen_address": { 00:12:47.627 "trtype": "TCP", 00:12:47.627 "adrfam": "IPv4", 00:12:47.627 "traddr": "10.0.0.3", 00:12:47.627 "trsvcid": "4420" 00:12:47.627 }, 00:12:47.627 "peer_address": { 00:12:47.627 "trtype": "TCP", 00:12:47.627 "adrfam": "IPv4", 00:12:47.627 "traddr": "10.0.0.1", 00:12:47.627 "trsvcid": "43440" 00:12:47.627 }, 00:12:47.627 "auth": { 00:12:47.627 "state": "completed", 00:12:47.627 "digest": "sha384", 00:12:47.627 "dhgroup": "ffdhe8192" 00:12:47.627 } 00:12:47.627 } 00:12:47.627 ]' 00:12:47.627 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.886 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.886 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.886 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.886 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.886 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.886 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.886 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.146 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:48.146 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:48.714 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.714 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:48.714 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.714 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.714 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.714 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.714 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:48.714 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.973 16:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.541 00:12:49.800 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.800 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.800 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.059 { 00:12:50.059 "cntlid": 93, 00:12:50.059 "qid": 0, 00:12:50.059 "state": "enabled", 00:12:50.059 "thread": 
"nvmf_tgt_poll_group_000", 00:12:50.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:50.059 "listen_address": { 00:12:50.059 "trtype": "TCP", 00:12:50.059 "adrfam": "IPv4", 00:12:50.059 "traddr": "10.0.0.3", 00:12:50.059 "trsvcid": "4420" 00:12:50.059 }, 00:12:50.059 "peer_address": { 00:12:50.059 "trtype": "TCP", 00:12:50.059 "adrfam": "IPv4", 00:12:50.059 "traddr": "10.0.0.1", 00:12:50.059 "trsvcid": "43476" 00:12:50.059 }, 00:12:50.059 "auth": { 00:12:50.059 "state": "completed", 00:12:50.059 "digest": "sha384", 00:12:50.059 "dhgroup": "ffdhe8192" 00:12:50.059 } 00:12:50.059 } 00:12:50.059 ]' 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.059 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.317 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:50.317 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:51.253 16:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:51.253 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.192 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.192 { 00:12:52.192 "cntlid": 95, 00:12:52.192 "qid": 0, 00:12:52.192 "state": "enabled", 00:12:52.192 
"thread": "nvmf_tgt_poll_group_000", 00:12:52.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:52.192 "listen_address": { 00:12:52.192 "trtype": "TCP", 00:12:52.192 "adrfam": "IPv4", 00:12:52.192 "traddr": "10.0.0.3", 00:12:52.192 "trsvcid": "4420" 00:12:52.192 }, 00:12:52.192 "peer_address": { 00:12:52.192 "trtype": "TCP", 00:12:52.192 "adrfam": "IPv4", 00:12:52.192 "traddr": "10.0.0.1", 00:12:52.192 "trsvcid": "34200" 00:12:52.192 }, 00:12:52.192 "auth": { 00:12:52.192 "state": "completed", 00:12:52.192 "digest": "sha384", 00:12:52.192 "dhgroup": "ffdhe8192" 00:12:52.192 } 00:12:52.192 } 00:12:52.192 ]' 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:52.192 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.451 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:52.451 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.451 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.451 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.451 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.710 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:52.710 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:12:53.279 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.539 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:53.539 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.539 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.539 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.539 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:53.539 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:53.539 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.539 16:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:53.539 16:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.798 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.057 00:12:54.057 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.057 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.057 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.315 { 00:12:54.315 "cntlid": 97, 00:12:54.315 "qid": 0, 00:12:54.315 "state": "enabled", 00:12:54.315 "thread": "nvmf_tgt_poll_group_000", 00:12:54.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:54.315 "listen_address": { 00:12:54.315 "trtype": "TCP", 00:12:54.315 "adrfam": "IPv4", 00:12:54.315 "traddr": "10.0.0.3", 00:12:54.315 "trsvcid": "4420" 00:12:54.315 }, 00:12:54.315 "peer_address": { 00:12:54.315 "trtype": "TCP", 00:12:54.315 "adrfam": "IPv4", 00:12:54.315 "traddr": "10.0.0.1", 00:12:54.315 "trsvcid": "34220" 00:12:54.315 }, 00:12:54.315 "auth": { 00:12:54.315 "state": "completed", 00:12:54.315 "digest": "sha512", 00:12:54.315 "dhgroup": "null" 00:12:54.315 } 00:12:54.315 } 00:12:54.315 ]' 00:12:54.315 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.573 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.573 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.573 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:54.573 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.573 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.573 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.573 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.833 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:54.833 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:12:55.453 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.453 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:55.453 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.453 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.453 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:55.453 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.453 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:55.453 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.715 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.283 00:12:56.283 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.283 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.283 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.542 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.542 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.542 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.542 16:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.542 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.542 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.542 { 00:12:56.542 "cntlid": 99, 00:12:56.542 "qid": 0, 00:12:56.542 "state": "enabled", 00:12:56.542 "thread": "nvmf_tgt_poll_group_000", 00:12:56.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:56.542 "listen_address": { 00:12:56.542 "trtype": "TCP", 00:12:56.542 "adrfam": "IPv4", 00:12:56.542 "traddr": "10.0.0.3", 00:12:56.542 "trsvcid": "4420" 00:12:56.542 }, 00:12:56.542 "peer_address": { 00:12:56.542 "trtype": "TCP", 00:12:56.542 "adrfam": "IPv4", 00:12:56.542 "traddr": "10.0.0.1", 00:12:56.542 "trsvcid": "34250" 00:12:56.542 }, 00:12:56.542 "auth": { 00:12:56.543 "state": "completed", 00:12:56.543 "digest": "sha512", 00:12:56.543 "dhgroup": "null" 00:12:56.543 } 00:12:56.543 } 00:12:56.543 ]' 00:12:56.543 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.543 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.543 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.543 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:56.543 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.543 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.543 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.543 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.802 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:56.802 16:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:12:57.740 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.740 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:57.740 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.740 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.740 16:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.740 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.740 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:57.740 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:57.999 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:57.999 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.999 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:57.999 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:57.999 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:57.999 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.999 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.999 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.000 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.000 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.000 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.000 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.000 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.259 00:12:58.259 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.259 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.259 16:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.518 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.518 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.518 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.518 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.518 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.518 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.518 { 00:12:58.518 "cntlid": 101, 00:12:58.518 "qid": 0, 00:12:58.518 "state": "enabled", 00:12:58.518 "thread": "nvmf_tgt_poll_group_000", 00:12:58.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:12:58.518 "listen_address": { 00:12:58.518 "trtype": "TCP", 00:12:58.518 "adrfam": "IPv4", 00:12:58.518 "traddr": "10.0.0.3", 00:12:58.518 "trsvcid": "4420" 00:12:58.518 }, 00:12:58.518 "peer_address": { 00:12:58.518 "trtype": "TCP", 00:12:58.518 "adrfam": "IPv4", 00:12:58.518 "traddr": "10.0.0.1", 00:12:58.518 "trsvcid": "34290" 00:12:58.518 }, 00:12:58.518 "auth": { 00:12:58.518 "state": "completed", 00:12:58.518 "digest": "sha512", 00:12:58.518 "dhgroup": "null" 00:12:58.518 } 00:12:58.518 } 00:12:58.518 ]' 00:12:58.518 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.518 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.518 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.777 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:58.777 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.777 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.777 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.777 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.036 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:59.036 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:12:59.604 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.604 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:12:59.604 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.604 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:59.604 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.604 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.604 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:59.605 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.864 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.432 00:13:00.432 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.432 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.432 16:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.691 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.691 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.691 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:00.691 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.691 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.691 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.691 { 00:13:00.691 "cntlid": 103, 00:13:00.691 "qid": 0, 00:13:00.691 "state": "enabled", 00:13:00.691 "thread": "nvmf_tgt_poll_group_000", 00:13:00.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:00.691 "listen_address": { 00:13:00.691 "trtype": "TCP", 00:13:00.691 "adrfam": "IPv4", 00:13:00.691 "traddr": "10.0.0.3", 00:13:00.691 "trsvcid": "4420" 00:13:00.691 }, 00:13:00.691 "peer_address": { 00:13:00.691 "trtype": "TCP", 00:13:00.691 "adrfam": "IPv4", 00:13:00.691 "traddr": "10.0.0.1", 00:13:00.691 "trsvcid": "57720" 00:13:00.691 }, 00:13:00.691 "auth": { 00:13:00.691 "state": "completed", 00:13:00.691 "digest": "sha512", 00:13:00.691 "dhgroup": "null" 00:13:00.691 } 00:13:00.691 } 00:13:00.691 ]' 00:13:00.691 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.950 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.950 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.950 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:00.950 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.950 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.950 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.950 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.208 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:01.208 16:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:01.776 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.776 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:01.776 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.776 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.776 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:01.776 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.776 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.776 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:01.776 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.033 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.291 00:13:02.291 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.291 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.291 16:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.858 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.858 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.858 
16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.858 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.858 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.858 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.858 { 00:13:02.858 "cntlid": 105, 00:13:02.858 "qid": 0, 00:13:02.858 "state": "enabled", 00:13:02.858 "thread": "nvmf_tgt_poll_group_000", 00:13:02.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:02.858 "listen_address": { 00:13:02.859 "trtype": "TCP", 00:13:02.859 "adrfam": "IPv4", 00:13:02.859 "traddr": "10.0.0.3", 00:13:02.859 "trsvcid": "4420" 00:13:02.859 }, 00:13:02.859 "peer_address": { 00:13:02.859 "trtype": "TCP", 00:13:02.859 "adrfam": "IPv4", 00:13:02.859 "traddr": "10.0.0.1", 00:13:02.859 "trsvcid": "57742" 00:13:02.859 }, 00:13:02.859 "auth": { 00:13:02.859 "state": "completed", 00:13:02.859 "digest": "sha512", 00:13:02.859 "dhgroup": "ffdhe2048" 00:13:02.859 } 00:13:02.859 } 00:13:02.859 ]' 00:13:02.859 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.859 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.859 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.859 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:02.859 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.859 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.859 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.859 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.118 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:03.118 16:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:03.686 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.686 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:03.686 16:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.686 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.686 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.686 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.686 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:03.686 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.945 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.204 00:13:04.204 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.204 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.204 16:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.772 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:04.772 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.772 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.773 { 00:13:04.773 "cntlid": 107, 00:13:04.773 "qid": 0, 00:13:04.773 "state": "enabled", 00:13:04.773 "thread": "nvmf_tgt_poll_group_000", 00:13:04.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:04.773 "listen_address": { 00:13:04.773 "trtype": "TCP", 00:13:04.773 "adrfam": "IPv4", 00:13:04.773 "traddr": "10.0.0.3", 00:13:04.773 "trsvcid": "4420" 00:13:04.773 }, 00:13:04.773 "peer_address": { 00:13:04.773 "trtype": "TCP", 00:13:04.773 "adrfam": "IPv4", 00:13:04.773 "traddr": "10.0.0.1", 00:13:04.773 "trsvcid": "57762" 00:13:04.773 }, 00:13:04.773 "auth": { 00:13:04.773 "state": "completed", 00:13:04.773 "digest": "sha512", 00:13:04.773 "dhgroup": "ffdhe2048" 00:13:04.773 } 00:13:04.773 } 00:13:04.773 ]' 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.773 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.032 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:05.032 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:05.600 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.600 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:05.600 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.600 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.600 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.600 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.600 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:05.600 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.169 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.429 00:13:06.429 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.429 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.429 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.687 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.687 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.687 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.688 { 00:13:06.688 "cntlid": 109, 00:13:06.688 "qid": 0, 00:13:06.688 "state": "enabled", 00:13:06.688 "thread": "nvmf_tgt_poll_group_000", 00:13:06.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:06.688 "listen_address": { 00:13:06.688 "trtype": "TCP", 00:13:06.688 "adrfam": "IPv4", 00:13:06.688 "traddr": "10.0.0.3", 00:13:06.688 "trsvcid": "4420" 00:13:06.688 }, 00:13:06.688 "peer_address": { 00:13:06.688 "trtype": "TCP", 00:13:06.688 "adrfam": "IPv4", 00:13:06.688 "traddr": "10.0.0.1", 00:13:06.688 "trsvcid": "57796" 00:13:06.688 }, 00:13:06.688 "auth": { 00:13:06.688 "state": "completed", 00:13:06.688 "digest": "sha512", 00:13:06.688 "dhgroup": "ffdhe2048" 00:13:06.688 } 00:13:06.688 } 00:13:06.688 ]' 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.688 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.946 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:06.946 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
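(For reference, the cycle that repeats throughout this part of the log is the per-key DH-HMAC-CHAP check driven by target/auth.sh. The shell sketch below only condenses the RPC and nvme-cli calls already visible above into one iteration; it is not additional test code. The DHHC-1 secrets are elided to placeholders, key0/ckey0 are key names registered earlier in the test, and rpc.py without -s is assumed here to reach the target application's default RPC socket, while -s /var/tmp/host.sock reaches the host-side SPDK application.)

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a

  # Limit the host-side initiator to the digest/dhgroup pair under test.
  "$RPC" -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Authorize the host on the target subsystem with this key pair.
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a host-side controller over TCP, then verify the qpair actually authenticated.
  "$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  "$RPC" -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'  # expect: completed
  "$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0

  # Repeat the check with the kernel initiator, then remove the host again.
  nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 \
    --dhchap-secret 'DHHC-1:00:<key0 secret>' --dhchap-ctrl-secret 'DHHC-1:03:<ckey0 secret>'
  nvme disconnect -n "$SUBNQN"
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

Each pass swaps in a different digest, dhgroup and key index; in this stretch of the log the combinations cycle through sha384 with ffdhe8192 and sha512 with null, ffdhe2048 and ffdhe3072 across keys 0-3, which is why the same set_options / add_host / attach / qpair-check / detach / connect / disconnect / remove_host sequence recurs.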
00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.882 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.450 00:13:08.450 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.450 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.450 16:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.722 { 00:13:08.722 "cntlid": 111, 00:13:08.722 "qid": 0, 00:13:08.722 "state": "enabled", 00:13:08.722 "thread": "nvmf_tgt_poll_group_000", 00:13:08.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:08.722 "listen_address": { 00:13:08.722 "trtype": "TCP", 00:13:08.722 "adrfam": "IPv4", 00:13:08.722 "traddr": "10.0.0.3", 00:13:08.722 "trsvcid": "4420" 00:13:08.722 }, 00:13:08.722 "peer_address": { 00:13:08.722 "trtype": "TCP", 00:13:08.722 "adrfam": "IPv4", 00:13:08.722 "traddr": "10.0.0.1", 00:13:08.722 "trsvcid": "57812" 00:13:08.722 }, 00:13:08.722 "auth": { 00:13:08.722 "state": "completed", 00:13:08.722 "digest": "sha512", 00:13:08.722 "dhgroup": "ffdhe2048" 00:13:08.722 } 00:13:08.722 } 00:13:08.722 ]' 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.722 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.994 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:08.994 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.926 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.183 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.183 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.183 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.183 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.440 00:13:10.440 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.440 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.440 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.698 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.698 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.698 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.698 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.698 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.698 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.698 { 00:13:10.698 "cntlid": 113, 00:13:10.698 "qid": 0, 00:13:10.698 "state": "enabled", 00:13:10.698 "thread": "nvmf_tgt_poll_group_000", 00:13:10.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:10.698 "listen_address": { 00:13:10.698 "trtype": "TCP", 00:13:10.698 "adrfam": "IPv4", 00:13:10.698 "traddr": "10.0.0.3", 00:13:10.698 "trsvcid": "4420" 00:13:10.698 }, 00:13:10.698 "peer_address": { 00:13:10.698 "trtype": "TCP", 00:13:10.698 "adrfam": "IPv4", 00:13:10.698 "traddr": "10.0.0.1", 00:13:10.698 "trsvcid": "43068" 00:13:10.698 }, 00:13:10.698 "auth": { 00:13:10.698 "state": "completed", 00:13:10.698 "digest": "sha512", 00:13:10.698 "dhgroup": "ffdhe3072" 00:13:10.698 } 00:13:10.698 } 00:13:10.698 ]' 00:13:10.698 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.698 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.698 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.956 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:10.956 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.956 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.956 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.956 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.214 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:11.214 16:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret 
DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:11.778 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.778 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:11.778 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.778 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.778 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.778 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.778 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:11.779 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.037 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:12.037 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.037 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.037 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:12.037 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:12.037 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.037 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.037 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.037 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.295 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.295 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.295 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.295 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.553 00:13:12.553 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.553 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.553 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.812 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.812 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.812 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.812 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.812 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.812 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.812 { 00:13:12.812 "cntlid": 115, 00:13:12.812 "qid": 0, 00:13:12.812 "state": "enabled", 00:13:12.812 "thread": "nvmf_tgt_poll_group_000", 00:13:12.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:12.812 "listen_address": { 00:13:12.812 "trtype": "TCP", 00:13:12.812 "adrfam": "IPv4", 00:13:12.812 "traddr": "10.0.0.3", 00:13:12.812 "trsvcid": "4420" 00:13:12.812 }, 00:13:12.812 "peer_address": { 00:13:12.812 "trtype": "TCP", 00:13:12.812 "adrfam": "IPv4", 00:13:12.812 "traddr": "10.0.0.1", 00:13:12.812 "trsvcid": "43090" 00:13:12.812 }, 00:13:12.812 "auth": { 00:13:12.812 "state": "completed", 00:13:12.812 "digest": "sha512", 00:13:12.812 "dhgroup": "ffdhe3072" 00:13:12.812 } 00:13:12.812 } 00:13:12.812 ]' 00:13:12.812 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.812 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.070 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.070 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:13.070 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.070 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.070 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.070 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.329 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:13.329 16:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 
088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:13.895 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.895 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:13.895 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.895 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.895 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.895 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.895 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:13.895 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.461 16:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.719 00:13:14.719 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.719 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.719 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.978 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.978 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.978 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.978 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.978 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.978 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.978 { 00:13:14.978 "cntlid": 117, 00:13:14.978 "qid": 0, 00:13:14.978 "state": "enabled", 00:13:14.978 "thread": "nvmf_tgt_poll_group_000", 00:13:14.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:14.978 "listen_address": { 00:13:14.978 "trtype": "TCP", 00:13:14.978 "adrfam": "IPv4", 00:13:14.978 "traddr": "10.0.0.3", 00:13:14.978 "trsvcid": "4420" 00:13:14.978 }, 00:13:14.978 "peer_address": { 00:13:14.978 "trtype": "TCP", 00:13:14.978 "adrfam": "IPv4", 00:13:14.978 "traddr": "10.0.0.1", 00:13:14.978 "trsvcid": "43118" 00:13:14.978 }, 00:13:14.978 "auth": { 00:13:14.978 "state": "completed", 00:13:14.978 "digest": "sha512", 00:13:14.978 "dhgroup": "ffdhe3072" 00:13:14.978 } 00:13:14.978 } 00:13:14.978 ]' 00:13:14.978 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.978 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.978 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.237 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:15.237 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.237 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.237 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.237 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.495 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:15.495 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:16.062 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.062 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:16.062 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.062 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.062 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.062 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.062 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.062 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:16.321 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:16.889 00:13:16.889 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.889 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.889 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.147 { 00:13:17.147 "cntlid": 119, 00:13:17.147 "qid": 0, 00:13:17.147 "state": "enabled", 00:13:17.147 "thread": "nvmf_tgt_poll_group_000", 00:13:17.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:17.147 "listen_address": { 00:13:17.147 "trtype": "TCP", 00:13:17.147 "adrfam": "IPv4", 00:13:17.147 "traddr": "10.0.0.3", 00:13:17.147 "trsvcid": "4420" 00:13:17.147 }, 00:13:17.147 "peer_address": { 00:13:17.147 "trtype": "TCP", 00:13:17.147 "adrfam": "IPv4", 00:13:17.147 "traddr": "10.0.0.1", 00:13:17.147 "trsvcid": "43150" 00:13:17.147 }, 00:13:17.147 "auth": { 00:13:17.147 "state": "completed", 00:13:17.147 "digest": "sha512", 00:13:17.147 "dhgroup": "ffdhe3072" 00:13:17.147 } 00:13:17.147 } 00:13:17.147 ]' 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.147 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.406 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:17.406 16:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:18.342 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.343 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.911 00:13:18.911 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.911 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.911 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.170 { 00:13:19.170 "cntlid": 121, 00:13:19.170 "qid": 0, 00:13:19.170 "state": "enabled", 00:13:19.170 "thread": "nvmf_tgt_poll_group_000", 00:13:19.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:19.170 "listen_address": { 00:13:19.170 "trtype": "TCP", 00:13:19.170 "adrfam": "IPv4", 00:13:19.170 "traddr": "10.0.0.3", 00:13:19.170 "trsvcid": "4420" 00:13:19.170 }, 00:13:19.170 "peer_address": { 00:13:19.170 "trtype": "TCP", 00:13:19.170 "adrfam": "IPv4", 00:13:19.170 "traddr": "10.0.0.1", 00:13:19.170 "trsvcid": "43190" 00:13:19.170 }, 00:13:19.170 "auth": { 00:13:19.170 "state": "completed", 00:13:19.170 "digest": "sha512", 00:13:19.170 "dhgroup": "ffdhe4096" 00:13:19.170 } 00:13:19.170 } 00:13:19.170 ]' 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.170 16:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.429 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret 
DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:19.429 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:19.997 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.997 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:19.997 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.997 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.256 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.516 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.516 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.516 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.516 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.775 00:13:20.775 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.775 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.775 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.033 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.033 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.033 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.034 { 00:13:21.034 "cntlid": 123, 00:13:21.034 "qid": 0, 00:13:21.034 "state": "enabled", 00:13:21.034 "thread": "nvmf_tgt_poll_group_000", 00:13:21.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:21.034 "listen_address": { 00:13:21.034 "trtype": "TCP", 00:13:21.034 "adrfam": "IPv4", 00:13:21.034 "traddr": "10.0.0.3", 00:13:21.034 "trsvcid": "4420" 00:13:21.034 }, 00:13:21.034 "peer_address": { 00:13:21.034 "trtype": "TCP", 00:13:21.034 "adrfam": "IPv4", 00:13:21.034 "traddr": "10.0.0.1", 00:13:21.034 "trsvcid": "38120" 00:13:21.034 }, 00:13:21.034 "auth": { 00:13:21.034 "state": "completed", 00:13:21.034 "digest": "sha512", 00:13:21.034 "dhgroup": "ffdhe4096" 00:13:21.034 } 00:13:21.034 } 00:13:21.034 ]' 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.034 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.605 16:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:21.605 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:22.175 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.175 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:22.175 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.175 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.175 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.175 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.175 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.175 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.433 16:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.433 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.000 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.000 { 00:13:23.000 "cntlid": 125, 00:13:23.000 "qid": 0, 00:13:23.000 "state": "enabled", 00:13:23.000 "thread": "nvmf_tgt_poll_group_000", 00:13:23.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:23.000 "listen_address": { 00:13:23.000 "trtype": "TCP", 00:13:23.000 "adrfam": "IPv4", 00:13:23.000 "traddr": "10.0.0.3", 00:13:23.000 "trsvcid": "4420" 00:13:23.000 }, 00:13:23.000 "peer_address": { 00:13:23.000 "trtype": "TCP", 00:13:23.000 "adrfam": "IPv4", 00:13:23.000 "traddr": "10.0.0.1", 00:13:23.000 "trsvcid": "38154" 00:13:23.000 }, 00:13:23.000 "auth": { 00:13:23.000 "state": "completed", 00:13:23.000 "digest": "sha512", 00:13:23.000 "dhgroup": "ffdhe4096" 00:13:23.000 } 00:13:23.000 } 00:13:23.000 ]' 00:13:23.000 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.259 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.259 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.259 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:23.259 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.259 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.259 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.259 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.516 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:23.516 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:24.082 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.082 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:24.082 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.082 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.082 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.082 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.082 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.082 16:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.649 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.906 00:13:24.906 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.906 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.906 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.164 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.164 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.165 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.165 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.165 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.165 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.165 { 00:13:25.165 "cntlid": 127, 00:13:25.165 "qid": 0, 00:13:25.165 "state": "enabled", 00:13:25.165 "thread": "nvmf_tgt_poll_group_000", 00:13:25.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:25.165 "listen_address": { 00:13:25.165 "trtype": "TCP", 00:13:25.165 "adrfam": "IPv4", 00:13:25.165 "traddr": "10.0.0.3", 00:13:25.165 "trsvcid": "4420" 00:13:25.165 }, 00:13:25.165 "peer_address": { 00:13:25.165 "trtype": "TCP", 00:13:25.165 "adrfam": "IPv4", 00:13:25.165 "traddr": "10.0.0.1", 00:13:25.165 "trsvcid": "38186" 00:13:25.165 }, 00:13:25.165 "auth": { 00:13:25.165 "state": "completed", 00:13:25.165 "digest": "sha512", 00:13:25.165 "dhgroup": "ffdhe4096" 00:13:25.165 } 00:13:25.165 } 00:13:25.165 ]' 00:13:25.165 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.165 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.165 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.165 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:25.165 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.423 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.423 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.423 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.682 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:25.682 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:26.248 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.248 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:26.248 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.248 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.248 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.248 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.248 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.248 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.248 16:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.506 16:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.506 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.074 00:13:27.074 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.074 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.074 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.333 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.333 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.333 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.333 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.333 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.333 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.333 { 00:13:27.333 "cntlid": 129, 00:13:27.333 "qid": 0, 00:13:27.333 "state": "enabled", 00:13:27.333 "thread": "nvmf_tgt_poll_group_000", 00:13:27.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:27.333 "listen_address": { 00:13:27.333 "trtype": "TCP", 00:13:27.333 "adrfam": "IPv4", 00:13:27.333 "traddr": "10.0.0.3", 00:13:27.333 "trsvcid": "4420" 00:13:27.333 }, 00:13:27.333 "peer_address": { 00:13:27.333 "trtype": "TCP", 00:13:27.333 "adrfam": "IPv4", 00:13:27.333 "traddr": "10.0.0.1", 00:13:27.333 "trsvcid": "38208" 00:13:27.333 }, 00:13:27.333 "auth": { 00:13:27.333 "state": "completed", 00:13:27.333 "digest": "sha512", 00:13:27.333 "dhgroup": "ffdhe6144" 00:13:27.333 } 00:13:27.333 } 00:13:27.333 ]' 00:13:27.333 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.333 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.333 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.591 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:27.591 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.591 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.591 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.591 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.850 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:27.850 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:28.418 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.418 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:28.418 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.419 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.419 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.419 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.419 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.419 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.677 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:28.677 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.677 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.677 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:28.677 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:28.677 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.678 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.678 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.678 16:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.678 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.678 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.678 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.678 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.245 00:13:29.245 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.246 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.246 16:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.504 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.505 { 00:13:29.505 "cntlid": 131, 00:13:29.505 "qid": 0, 00:13:29.505 "state": "enabled", 00:13:29.505 "thread": "nvmf_tgt_poll_group_000", 00:13:29.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:29.505 "listen_address": { 00:13:29.505 "trtype": "TCP", 00:13:29.505 "adrfam": "IPv4", 00:13:29.505 "traddr": "10.0.0.3", 00:13:29.505 "trsvcid": "4420" 00:13:29.505 }, 00:13:29.505 "peer_address": { 00:13:29.505 "trtype": "TCP", 00:13:29.505 "adrfam": "IPv4", 00:13:29.505 "traddr": "10.0.0.1", 00:13:29.505 "trsvcid": "38244" 00:13:29.505 }, 00:13:29.505 "auth": { 00:13:29.505 "state": "completed", 00:13:29.505 "digest": "sha512", 00:13:29.505 "dhgroup": "ffdhe6144" 00:13:29.505 } 00:13:29.505 } 00:13:29.505 ]' 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:29.505 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:29.763 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.764 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.764 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.022 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:30.022 16:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:30.590 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.590 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:30.590 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.590 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.590 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.590 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.590 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:30.590 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.867 16:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.867 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.434 00:13:31.434 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.434 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.434 16:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.693 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.693 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.693 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.693 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.693 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.693 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.693 { 00:13:31.693 "cntlid": 133, 00:13:31.693 "qid": 0, 00:13:31.693 "state": "enabled", 00:13:31.693 "thread": "nvmf_tgt_poll_group_000", 00:13:31.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:31.693 "listen_address": { 00:13:31.693 "trtype": "TCP", 00:13:31.693 "adrfam": "IPv4", 00:13:31.693 "traddr": "10.0.0.3", 00:13:31.693 "trsvcid": "4420" 00:13:31.693 }, 00:13:31.693 "peer_address": { 00:13:31.693 "trtype": "TCP", 00:13:31.693 "adrfam": "IPv4", 00:13:31.694 "traddr": "10.0.0.1", 00:13:31.694 "trsvcid": "47140" 00:13:31.694 }, 00:13:31.694 "auth": { 00:13:31.694 "state": "completed", 00:13:31.694 "digest": "sha512", 00:13:31.694 "dhgroup": "ffdhe6144" 00:13:31.694 } 00:13:31.694 } 00:13:31.694 ]' 00:13:31.694 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.694 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.694 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.694 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:31.694 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.952 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.952 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.952 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.211 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:32.211 16:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:32.777 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.777 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:32.777 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.777 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.777 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.777 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.777 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:32.777 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.037 16:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.607 00:13:33.607 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.607 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.607 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.866 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.866 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.866 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.866 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.866 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.866 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.866 { 00:13:33.866 "cntlid": 135, 00:13:33.866 "qid": 0, 00:13:33.866 "state": "enabled", 00:13:33.866 "thread": "nvmf_tgt_poll_group_000", 00:13:33.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:33.866 "listen_address": { 00:13:33.866 "trtype": "TCP", 00:13:33.866 "adrfam": "IPv4", 00:13:33.866 "traddr": "10.0.0.3", 00:13:33.866 "trsvcid": "4420" 00:13:33.866 }, 00:13:33.866 "peer_address": { 00:13:33.866 "trtype": "TCP", 00:13:33.866 "adrfam": "IPv4", 00:13:33.866 "traddr": "10.0.0.1", 00:13:33.866 "trsvcid": "47168" 00:13:33.866 }, 00:13:33.866 "auth": { 00:13:33.866 "state": "completed", 00:13:33.866 "digest": "sha512", 00:13:33.866 "dhgroup": "ffdhe6144" 00:13:33.866 } 00:13:33.866 } 00:13:33.866 ]' 00:13:33.866 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.125 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.125 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.125 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:34.126 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.126 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.126 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.126 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.385 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:34.385 16:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:35.355 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.355 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:35.355 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.355 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.355 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.355 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:35.355 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.355 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:35.355 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.356 16:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.961 00:13:35.961 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.961 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.961 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.220 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.220 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.220 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.220 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.479 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.479 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.479 { 00:13:36.479 "cntlid": 137, 00:13:36.479 "qid": 0, 00:13:36.479 "state": "enabled", 00:13:36.479 "thread": "nvmf_tgt_poll_group_000", 00:13:36.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:36.479 "listen_address": { 00:13:36.479 "trtype": "TCP", 00:13:36.479 "adrfam": "IPv4", 00:13:36.479 "traddr": "10.0.0.3", 00:13:36.479 "trsvcid": "4420" 00:13:36.479 }, 00:13:36.479 "peer_address": { 00:13:36.479 "trtype": "TCP", 00:13:36.479 "adrfam": "IPv4", 00:13:36.479 "traddr": "10.0.0.1", 00:13:36.479 "trsvcid": "47196" 00:13:36.479 }, 00:13:36.479 "auth": { 00:13:36.479 "state": "completed", 00:13:36.479 "digest": "sha512", 00:13:36.479 "dhgroup": "ffdhe8192" 00:13:36.479 } 00:13:36.479 } 00:13:36.479 ]' 00:13:36.479 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.479 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.479 16:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.479 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:36.479 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.479 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.479 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.479 16:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.737 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:36.737 16:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:37.674 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.674 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:37.674 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.674 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.674 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.674 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.674 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:37.674 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:37.934 16:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.934 16:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.502 00:13:38.502 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.502 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.502 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.761 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.761 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.761 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.761 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.761 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.761 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.761 { 00:13:38.761 "cntlid": 139, 00:13:38.761 "qid": 0, 00:13:38.761 "state": "enabled", 00:13:38.761 "thread": "nvmf_tgt_poll_group_000", 00:13:38.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:38.761 "listen_address": { 00:13:38.761 "trtype": "TCP", 00:13:38.761 "adrfam": "IPv4", 00:13:38.761 "traddr": "10.0.0.3", 00:13:38.761 "trsvcid": "4420" 00:13:38.761 }, 00:13:38.761 "peer_address": { 00:13:38.761 "trtype": "TCP", 00:13:38.761 "adrfam": "IPv4", 00:13:38.761 "traddr": "10.0.0.1", 00:13:38.761 "trsvcid": "47220" 00:13:38.761 }, 00:13:38.761 "auth": { 00:13:38.761 "state": "completed", 00:13:38.761 "digest": "sha512", 00:13:38.761 "dhgroup": "ffdhe8192" 00:13:38.761 } 00:13:38.761 } 00:13:38.761 ]' 00:13:38.761 16:19:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.020 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.020 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.020 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.020 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.020 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.020 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.020 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.279 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:39.279 16:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: --dhchap-ctrl-secret DHHC-1:02:MzUyZWQ0MzNhNTNiZGY1YzMzNzFhMDRlM2UwZWI5Njc5ZWU0MjkzZDY1YzZjMThk6Gsj2g==: 00:13:40.218 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.218 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:40.218 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.218 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.218 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.218 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.218 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:40.218 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.477 16:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.044 00:13:41.044 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.044 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.044 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.303 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.303 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.304 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.304 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.304 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.304 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.304 { 00:13:41.304 "cntlid": 141, 00:13:41.304 "qid": 0, 00:13:41.304 "state": "enabled", 00:13:41.304 "thread": "nvmf_tgt_poll_group_000", 00:13:41.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:41.304 "listen_address": { 00:13:41.304 "trtype": "TCP", 00:13:41.304 "adrfam": "IPv4", 00:13:41.304 "traddr": "10.0.0.3", 00:13:41.304 "trsvcid": "4420" 00:13:41.304 }, 00:13:41.304 "peer_address": { 00:13:41.304 "trtype": "TCP", 00:13:41.304 "adrfam": "IPv4", 00:13:41.304 "traddr": "10.0.0.1", 00:13:41.304 "trsvcid": "60310" 00:13:41.304 }, 00:13:41.304 "auth": { 00:13:41.304 "state": "completed", 00:13:41.304 "digest": 
"sha512", 00:13:41.304 "dhgroup": "ffdhe8192" 00:13:41.304 } 00:13:41.304 } 00:13:41.304 ]' 00:13:41.304 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.304 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.304 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.563 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:41.563 16:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.563 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.563 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.563 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:41.821 16:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:01:NmM0ZWZmMGIxM2FmOWU1M2JmZTNiNDZjNTY4Njg5ZWJ0PBSQ: 00:13:42.389 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.389 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:42.389 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.389 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.389 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.389 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.389 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:42.389 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.957 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.525 00:13:43.525 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.525 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.525 16:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.785 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.785 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.785 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.785 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.785 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.785 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.785 { 00:13:43.785 "cntlid": 143, 00:13:43.785 "qid": 0, 00:13:43.785 "state": "enabled", 00:13:43.785 "thread": "nvmf_tgt_poll_group_000", 00:13:43.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:43.785 "listen_address": { 00:13:43.785 "trtype": "TCP", 00:13:43.785 "adrfam": "IPv4", 00:13:43.785 "traddr": "10.0.0.3", 00:13:43.785 "trsvcid": "4420" 00:13:43.785 }, 00:13:43.785 "peer_address": { 00:13:43.785 "trtype": "TCP", 00:13:43.785 "adrfam": "IPv4", 00:13:43.785 "traddr": "10.0.0.1", 00:13:43.785 "trsvcid": "60330" 00:13:43.785 }, 00:13:43.785 "auth": { 00:13:43.785 "state": "completed", 00:13:43.785 
"digest": "sha512", 00:13:43.785 "dhgroup": "ffdhe8192" 00:13:43.785 } 00:13:43.785 } 00:13:43.785 ]' 00:13:43.785 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.785 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.785 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.045 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:44.045 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.045 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.045 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.045 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.305 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:44.305 16:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:44.872 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.440 16:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.008 00:13:46.008 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.008 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.008 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.267 { 00:13:46.267 "cntlid": 145, 00:13:46.267 "qid": 0, 00:13:46.267 "state": "enabled", 00:13:46.267 "thread": "nvmf_tgt_poll_group_000", 00:13:46.267 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:46.267 "listen_address": { 00:13:46.267 "trtype": "TCP", 00:13:46.267 "adrfam": "IPv4", 00:13:46.267 "traddr": "10.0.0.3", 00:13:46.267 "trsvcid": "4420" 00:13:46.267 }, 00:13:46.267 "peer_address": { 00:13:46.267 "trtype": "TCP", 00:13:46.267 "adrfam": "IPv4", 00:13:46.267 "traddr": "10.0.0.1", 00:13:46.267 "trsvcid": "60358" 00:13:46.267 }, 00:13:46.267 "auth": { 00:13:46.267 "state": "completed", 00:13:46.267 "digest": "sha512", 00:13:46.267 "dhgroup": "ffdhe8192" 00:13:46.267 } 00:13:46.267 } 00:13:46.267 ]' 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:46.267 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.526 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.526 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.526 16:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.785 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:46.785 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:00:M2RmY2RkNTg1NDQ1Y2Q1N2ZiMDE3OWU5OTBkNzNlYzZmMWQ0OWJlYjYyNWZhMGYxeH0Nqg==: --dhchap-ctrl-secret DHHC-1:03:MTA3MjYyOTEzNTI3ODRmNDAzZjI4NDQ1NDQzOGJmYWM4ZGQwMmYzNGE4ZTY0NDE5YmJlMTJhNTRmM2JhNzc4YdM/lBc=: 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 00:13:47.353 16:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:47.353 16:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:47.921 request: 00:13:47.921 { 00:13:47.921 "name": "nvme0", 00:13:47.921 "trtype": "tcp", 00:13:47.921 "traddr": "10.0.0.3", 00:13:47.921 "adrfam": "ipv4", 00:13:47.921 "trsvcid": "4420", 00:13:47.921 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:47.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:47.921 "prchk_reftag": false, 00:13:47.921 "prchk_guard": false, 00:13:47.921 "hdgst": false, 00:13:47.921 "ddgst": false, 00:13:47.921 "dhchap_key": "key2", 00:13:47.921 "allow_unrecognized_csi": false, 00:13:47.921 "method": "bdev_nvme_attach_controller", 00:13:47.921 "req_id": 1 00:13:47.921 } 00:13:47.921 Got JSON-RPC error response 00:13:47.921 response: 00:13:47.921 { 00:13:47.921 "code": -5, 00:13:47.921 "message": "Input/output error" 00:13:47.921 } 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:47.921 
16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:47.921 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:47.922 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:47.922 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:47.922 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.922 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:47.922 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.922 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:47.922 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:47.922 16:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:48.499 request: 00:13:48.499 { 00:13:48.499 "name": "nvme0", 00:13:48.499 "trtype": "tcp", 00:13:48.499 "traddr": "10.0.0.3", 00:13:48.499 "adrfam": "ipv4", 00:13:48.499 "trsvcid": "4420", 00:13:48.499 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:48.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:48.499 "prchk_reftag": false, 00:13:48.499 "prchk_guard": false, 00:13:48.499 "hdgst": false, 00:13:48.499 "ddgst": false, 00:13:48.499 "dhchap_key": "key1", 00:13:48.499 "dhchap_ctrlr_key": "ckey2", 00:13:48.499 "allow_unrecognized_csi": false, 00:13:48.499 "method": "bdev_nvme_attach_controller", 00:13:48.499 "req_id": 1 00:13:48.499 } 00:13:48.499 Got JSON-RPC error response 00:13:48.499 response: 00:13:48.499 { 
00:13:48.499 "code": -5, 00:13:48.499 "message": "Input/output error" 00:13:48.499 } 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.499 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.131 
request: 00:13:49.131 { 00:13:49.131 "name": "nvme0", 00:13:49.131 "trtype": "tcp", 00:13:49.131 "traddr": "10.0.0.3", 00:13:49.131 "adrfam": "ipv4", 00:13:49.131 "trsvcid": "4420", 00:13:49.131 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:49.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:49.131 "prchk_reftag": false, 00:13:49.131 "prchk_guard": false, 00:13:49.131 "hdgst": false, 00:13:49.131 "ddgst": false, 00:13:49.131 "dhchap_key": "key1", 00:13:49.131 "dhchap_ctrlr_key": "ckey1", 00:13:49.131 "allow_unrecognized_csi": false, 00:13:49.131 "method": "bdev_nvme_attach_controller", 00:13:49.131 "req_id": 1 00:13:49.131 } 00:13:49.131 Got JSON-RPC error response 00:13:49.131 response: 00:13:49.131 { 00:13:49.131 "code": -5, 00:13:49.131 "message": "Input/output error" 00:13:49.131 } 00:13:49.131 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 79038 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 79038 ']' 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 79038 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79038 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.132 killing process with pid 79038 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79038' 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 79038 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 79038 00:13:49.132 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.391 16:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=82113 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 82113 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 82113 ']' 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.391 16:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.391 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.391 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:49.391 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.391 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.391 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.650 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.650 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:49.650 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 82113 00:13:49.650 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 82113 ']' 00:13:49.650 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.650 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.650 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
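The stretch of trace that follows reloads the DH-CHAP material into the restarted target through the SPDK keyring before any host is authorized again. Condensed into equivalent standalone commands (key-file paths are the ones generated earlier in this run; rpc_cmd is assumed to wrap scripts/rpc.py against the target's default RPC socket, /var/tmp/spdk.sock):

  # target side: register the generated DH-CHAP key files with the keyring
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.efW
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rlF
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.98y
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lNT
  scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha384.DnI
  scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ckm
  scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.Uu1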
00:13:49.650 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.650 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.909 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.909 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:49.909 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:49.909 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.909 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.909 null0 00:13:49.909 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.909 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.efW 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.rlF ]] 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rlF 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.98y 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.lNT ]] 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lNT 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.910 16:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DnI 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Ckm ]] 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ckm 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Uu1 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.910 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
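Taken on its own, the authenticated attach being exercised at this point reduces to one grant on the target and one attach from the host initiator, both naming the same key slot. A minimal sketch built from the NQNs, address and sockets used in this run (/var/tmp/host.sock is the initiator-side RPC socket the test drives):

  # target: allow this host NQN to connect when it authenticates with key3
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
      --dhchap-key key3
  # host: attach over TCP presenting the same key; on success the bdev nvme0n1 appears
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3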
00:13:50.169 16:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.107 nvme0n1 00:13:51.107 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.107 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.107 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.367 { 00:13:51.367 "cntlid": 1, 00:13:51.367 "qid": 0, 00:13:51.367 "state": "enabled", 00:13:51.367 "thread": "nvmf_tgt_poll_group_000", 00:13:51.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:51.367 "listen_address": { 00:13:51.367 "trtype": "TCP", 00:13:51.367 "adrfam": "IPv4", 00:13:51.367 "traddr": "10.0.0.3", 00:13:51.367 "trsvcid": "4420" 00:13:51.367 }, 00:13:51.367 "peer_address": { 00:13:51.367 "trtype": "TCP", 00:13:51.367 "adrfam": "IPv4", 00:13:51.367 "traddr": "10.0.0.1", 00:13:51.367 "trsvcid": "59118" 00:13:51.367 }, 00:13:51.367 "auth": { 00:13:51.367 "state": "completed", 00:13:51.367 "digest": "sha512", 00:13:51.367 "dhgroup": "ffdhe8192" 00:13:51.367 } 00:13:51.367 } 00:13:51.367 ]' 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.367 16:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.626 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:51.626 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key3 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:52.566 16:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.566 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.136 request: 00:13:53.136 { 00:13:53.136 "name": "nvme0", 00:13:53.136 "trtype": "tcp", 00:13:53.136 "traddr": "10.0.0.3", 00:13:53.136 "adrfam": "ipv4", 00:13:53.136 "trsvcid": "4420", 00:13:53.136 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:53.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:53.136 "prchk_reftag": false, 00:13:53.136 "prchk_guard": false, 00:13:53.136 "hdgst": false, 00:13:53.136 "ddgst": false, 00:13:53.136 "dhchap_key": "key3", 00:13:53.136 "allow_unrecognized_csi": false, 00:13:53.136 "method": "bdev_nvme_attach_controller", 00:13:53.136 "req_id": 1 00:13:53.137 } 00:13:53.137 Got JSON-RPC error response 00:13:53.137 response: 00:13:53.137 { 00:13:53.137 "code": -5, 00:13:53.137 "message": "Input/output error" 00:13:53.137 } 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.137 16:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.398 request: 00:13:53.398 { 00:13:53.398 "name": "nvme0", 00:13:53.398 "trtype": "tcp", 00:13:53.398 "traddr": "10.0.0.3", 00:13:53.398 "adrfam": "ipv4", 00:13:53.398 "trsvcid": "4420", 00:13:53.398 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:53.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:53.398 "prchk_reftag": false, 00:13:53.398 "prchk_guard": false, 00:13:53.398 "hdgst": false, 00:13:53.398 "ddgst": false, 00:13:53.398 "dhchap_key": "key3", 00:13:53.398 "allow_unrecognized_csi": false, 00:13:53.398 "method": "bdev_nvme_attach_controller", 00:13:53.398 "req_id": 1 00:13:53.398 } 00:13:53.398 Got JSON-RPC error response 00:13:53.398 response: 00:13:53.398 { 00:13:53.398 "code": -5, 00:13:53.398 "message": "Input/output error" 00:13:53.398 } 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:53.398 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:53.969 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:54.229 request: 00:13:54.229 { 00:13:54.229 "name": "nvme0", 00:13:54.229 "trtype": "tcp", 00:13:54.229 "traddr": "10.0.0.3", 00:13:54.229 "adrfam": "ipv4", 00:13:54.229 "trsvcid": "4420", 00:13:54.229 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:54.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:54.229 "prchk_reftag": false, 00:13:54.229 "prchk_guard": false, 00:13:54.229 "hdgst": false, 00:13:54.229 "ddgst": false, 00:13:54.229 "dhchap_key": "key0", 00:13:54.229 "dhchap_ctrlr_key": "key1", 00:13:54.229 "allow_unrecognized_csi": false, 00:13:54.229 "method": "bdev_nvme_attach_controller", 00:13:54.229 "req_id": 1 00:13:54.229 } 00:13:54.229 Got JSON-RPC error response 00:13:54.229 response: 00:13:54.229 { 00:13:54.229 "code": -5, 00:13:54.229 "message": "Input/output error" 00:13:54.229 } 00:13:54.229 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:54.229 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:54.229 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:54.229 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:54.229 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:54.229 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:54.229 16:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:54.489 nvme0n1 00:13:54.489 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:54.489 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:54.489 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.749 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.749 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.749 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.009 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 00:13:55.009 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.009 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.009 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.009 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:55.009 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:55.009 16:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:55.945 nvme0n1 00:13:55.945 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:55.945 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:56.205 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.465 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.465 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:56.465 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.465 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.465 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.465 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:56.465 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:56.465 16:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.723 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.723 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:56.723 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid 088cee68-288e-4cf6-92d0-e6cd1eb4210a -l 0 --dhchap-secret DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: --dhchap-ctrl-secret DHHC-1:03:ZjIxZmM4ODBhMzYyNjA5NGQ5MzlhMDJhNjY2MGM2ZGY2YzBlODdmZDU1ODM3ODlkYWQzNGYwNDAzMzZiNTk0N45Xxa0=: 00:13:57.292 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:57.292 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:57.292 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:57.292 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:57.292 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:57.292 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:57.292 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:57.292 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.292 16:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:57.860 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:58.428 request: 00:13:58.428 { 00:13:58.428 "name": "nvme0", 00:13:58.428 "trtype": "tcp", 00:13:58.428 "traddr": "10.0.0.3", 00:13:58.428 "adrfam": "ipv4", 00:13:58.428 "trsvcid": "4420", 00:13:58.428 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:58.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a", 00:13:58.428 "prchk_reftag": false, 00:13:58.428 "prchk_guard": false, 00:13:58.428 "hdgst": false, 00:13:58.428 "ddgst": false, 00:13:58.428 "dhchap_key": "key1", 00:13:58.428 "allow_unrecognized_csi": false, 00:13:58.428 "method": "bdev_nvme_attach_controller", 00:13:58.428 "req_id": 1 00:13:58.428 } 00:13:58.428 Got JSON-RPC error response 00:13:58.428 response: 00:13:58.428 { 00:13:58.428 "code": -5, 00:13:58.428 "message": "Input/output error" 00:13:58.428 } 00:13:58.428 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:58.428 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.428 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.428 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.428 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:58.428 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:58.428 16:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:59.365 nvme0n1 00:13:59.365 
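This is the key-rotation pattern the remainder of the test keeps returning to: rotate the keys the target expects for this host with nvmf_subsystem_set_keys, confirm that the stale key is rejected with the JSON-RPC code -5 Input/output error shown above, then re-attach with the new pair. Condensed from the surrounding trace (same NQNs and key slots):

  # target: rotate this host's DH-CHAP keys to key2 (host) / key3 (controller)
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # host: an attach still using the old key1 is refused (code -5, Input/output error)
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
  # host: re-attaching with the rotated pair succeeds and nvme0n1 comes back
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3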
16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:59.365 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.365 16:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:59.623 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.623 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.623 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.882 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:13:59.882 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.882 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.882 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.882 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:59.882 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:59.882 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:00.449 nvme0n1 00:14:00.449 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:00.449 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:00.449 16:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.709 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.709 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.709 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.968 16:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: '' 2s 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: ]] 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGE2NDFiMGIyMDRkODAyNzA4NGVhMmI5NDVjZGEwNWLTZapk: 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:00.968 16:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: 2s 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:02.899 16:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: ]] 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:M2M4Yjk4OWRiNjVlY2M5NTM5YjJkZmFlNDMzYjM3YzhjOWU3YjcxZWY3NTAyNTk0F6KcRA==: 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:02.899 16:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:05.434 16:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:06.003 nvme0n1 00:14:06.003 16:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:06.003 16:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.003 16:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.003 16:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.003 16:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:06.003 16:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:06.939 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:07.198 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:07.198 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.198 16:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:07.767 16:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:07.767 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:08.335 request: 00:14:08.335 { 00:14:08.335 "name": "nvme0", 00:14:08.335 "dhchap_key": "key1", 00:14:08.335 "dhchap_ctrlr_key": "key3", 00:14:08.335 "method": "bdev_nvme_set_keys", 00:14:08.335 "req_id": 1 00:14:08.335 } 00:14:08.335 Got JSON-RPC error response 00:14:08.335 response: 00:14:08.335 { 00:14:08.335 "code": -13, 00:14:08.335 "message": "Permission denied" 00:14:08.335 } 00:14:08.335 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:08.335 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:08.335 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:08.335 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:08.335 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:08.335 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.335 16:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:08.595 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:08.595 16:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:09.973 16:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:10.909 nvme0n1 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:10.909 16:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:11.846 request: 00:14:11.846 { 00:14:11.846 "name": "nvme0", 00:14:11.846 "dhchap_key": "key2", 00:14:11.846 "dhchap_ctrlr_key": "key0", 00:14:11.846 "method": "bdev_nvme_set_keys", 00:14:11.846 "req_id": 1 00:14:11.846 } 00:14:11.846 Got JSON-RPC error response 00:14:11.846 response: 00:14:11.846 { 00:14:11.846 "code": -13, 00:14:11.846 "message": "Permission denied" 00:14:11.846 } 00:14:11.846 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:11.846 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:11.846 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:11.846 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:11.846 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:11.846 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.846 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:11.846 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:11.846 16:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:13.221 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:13.221 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:13.221 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.221 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:13.221 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:13.221 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:13.221 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 79063 00:14:13.221 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 79063 ']' 00:14:13.221 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 79063 00:14:13.480 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:13.480 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.480 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79063 00:14:13.480 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:13.480 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:13.480 killing process with pid 79063 00:14:13.480 16:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79063' 00:14:13.480 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 79063 00:14:13.480 16:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 79063 00:14:13.757 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.758 rmmod nvme_tcp 00:14:13.758 rmmod nvme_fabrics 00:14:13.758 rmmod nvme_keyring 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 82113 ']' 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 82113 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 82113 ']' 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 82113 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.758 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82113 00:14:13.759 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.759 killing process with pid 82113 00:14:13.759 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.759 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82113' 00:14:13.759 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 82113 00:14:13.759 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 82113 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:14.018 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.efW /tmp/spdk.key-sha256.98y /tmp/spdk.key-sha384.DnI /tmp/spdk.key-sha512.Uu1 /tmp/spdk.key-sha512.rlF /tmp/spdk.key-sha384.lNT /tmp/spdk.key-sha256.Ckm '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:14.018 00:14:14.018 real 3m9.993s 00:14:14.018 user 7m35.628s 00:14:14.018 sys 0m28.435s 00:14:14.277 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.277 ************************************ 00:14:14.277 END TEST nvmf_auth_target 00:14:14.277 ************************************ 00:14:14.278 16:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:14.278 ************************************ 00:14:14.278 START TEST nvmf_bdevio_no_huge 00:14:14.278 ************************************ 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:14.278 * Looking for test storage... 00:14:14.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.278 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:14.538 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.538 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.538 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.538 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:14.538 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.538 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:14.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.538 --rc genhtml_branch_coverage=1 00:14:14.538 --rc genhtml_function_coverage=1 00:14:14.538 --rc genhtml_legend=1 00:14:14.538 --rc geninfo_all_blocks=1 00:14:14.538 --rc geninfo_unexecuted_blocks=1 00:14:14.538 00:14:14.538 ' 00:14:14.538 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:14.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.539 --rc genhtml_branch_coverage=1 00:14:14.539 --rc genhtml_function_coverage=1 00:14:14.539 --rc genhtml_legend=1 00:14:14.539 --rc geninfo_all_blocks=1 00:14:14.539 --rc geninfo_unexecuted_blocks=1 00:14:14.539 00:14:14.539 ' 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:14.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.539 --rc genhtml_branch_coverage=1 00:14:14.539 --rc genhtml_function_coverage=1 00:14:14.539 --rc genhtml_legend=1 00:14:14.539 --rc geninfo_all_blocks=1 00:14:14.539 --rc geninfo_unexecuted_blocks=1 00:14:14.539 00:14:14.539 ' 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:14.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.539 --rc genhtml_branch_coverage=1 00:14:14.539 --rc genhtml_function_coverage=1 00:14:14.539 --rc genhtml_legend=1 00:14:14.539 --rc geninfo_all_blocks=1 00:14:14.539 --rc geninfo_unexecuted_blocks=1 00:14:14.539 00:14:14.539 ' 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:14.539 
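The cmp_versions trace above (scripts/common.sh, reached through "lt 1.15 2") is deciding whether the installed lcov predates 2.x: each version string is split on '.', '-' and ':' and the components are compared numerically. A minimal standalone sketch of the same idea — not the actual scripts/common.sh source; this version simply treats missing components as zero:

    version_lt() {                       # true (0) if $1 sorts strictly before $2
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        done
        return 1                         # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov older than 2.x: keep the branch/function coverage flags"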
16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:14.539 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:14.539 
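nvmftestinit / nvmf_veth_init above first tears down any leftover interfaces (hence the "Cannot find device" messages that follow) and then rebuilds the virtual test network: the initiator ends of two veth pairs stay in the root namespace with 10.0.0.1 and 10.0.0.2, the target ends are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, and everything is joined through the nvmf_br bridge. A condensed sketch of one initiator/target pair, using the same names and addresses as the trace below (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4, is created the same way, and the trace additionally tags its iptables rules with an SPDK_NVMF comment):

    ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge ties both pairs together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator side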
16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:14.539 Cannot find device "nvmf_init_br" 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:14.539 Cannot find device "nvmf_init_br2" 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:14.539 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:14.539 Cannot find device "nvmf_tgt_br" 00:14:14.540 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:14.540 16:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.540 Cannot find device "nvmf_tgt_br2" 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:14.540 Cannot find device "nvmf_init_br" 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:14.540 Cannot find device "nvmf_init_br2" 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:14.540 Cannot find device "nvmf_tgt_br" 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:14.540 Cannot find device "nvmf_tgt_br2" 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:14.540 Cannot find device "nvmf_br" 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:14.540 Cannot find device "nvmf_init_if" 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:14.540 Cannot find device "nvmf_init_if2" 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:14.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:14.540 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:14.800 16:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:14.800 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:14.800 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:14:14.800 00:14:14.800 --- 10.0.0.3 ping statistics --- 00:14:14.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.800 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:14.800 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:14.800 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:14:14.800 00:14:14.800 --- 10.0.0.4 ping statistics --- 00:14:14.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.800 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:14.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:14.800 00:14:14.800 --- 10.0.0.1 ping statistics --- 00:14:14.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.800 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:14.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:14.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:14.800 00:14:14.800 --- 10.0.0.2 ping statistics --- 00:14:14.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.800 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.800 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82757 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82757 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82757 ']' 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.801 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.801 [2024-11-26 16:19:40.414519] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
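
The nvmf_veth_init sequence traced above builds the bridged test topology that the target launched here listens on: veth pairs for the initiator side (nvmf_init_if/nvmf_init_br, nvmf_init_if2/nvmf_init_br2) and the target side (nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2), with the target ends moved into the nvmf_tgt_ns_spdk namespace, all peer ends enslaved to the nvmf_br bridge, and iptables ACCEPT rules for TCP port 4420. A condensed manual equivalent, showing one initiator pair and one target pair only (names and addresses taken from the trace; the helpers in nvmf/common.sh do additional bookkeeping):

  # namespace plus one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace

  # addressing: initiator on the host, target inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring links up and bridge the peer ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # let NVMe/TCP traffic in and verify reachability, as the pings above do
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3

With that in place the target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt), here with --no-huge -s 1024 so the bdevio-no-huge case runs without hugepages.
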
00:14:14.801 [2024-11-26 16:19:40.414640] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:15.060 [2024-11-26 16:19:40.574406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.060 [2024-11-26 16:19:40.635815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.060 [2024-11-26 16:19:40.635890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.060 [2024-11-26 16:19:40.635914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.060 [2024-11-26 16:19:40.635923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.060 [2024-11-26 16:19:40.635932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.060 [2024-11-26 16:19:40.636864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:15.060 [2024-11-26 16:19:40.637002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:15.060 [2024-11-26 16:19:40.637152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.060 [2024-11-26 16:19:40.637152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:15.060 [2024-11-26 16:19:40.643171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:15.320 [2024-11-26 16:19:40.825877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:15.320 Malloc0 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.320 16:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:15.320 [2024-11-26 16:19:40.870200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:15.320 { 00:14:15.320 "params": { 00:14:15.320 "name": "Nvme$subsystem", 00:14:15.320 "trtype": "$TEST_TRANSPORT", 00:14:15.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:15.320 "adrfam": "ipv4", 00:14:15.320 "trsvcid": "$NVMF_PORT", 00:14:15.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:15.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:15.320 "hdgst": ${hdgst:-false}, 00:14:15.320 "ddgst": ${ddgst:-false} 00:14:15.320 }, 00:14:15.320 "method": "bdev_nvme_attach_controller" 00:14:15.320 } 00:14:15.320 EOF 00:14:15.320 )") 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
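
The rpc_cmd calls in bdevio.sh@18-@22 above provision the target: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.3:4420. rpc_cmd is a thin wrapper around rpc.py, so issued by hand against the running nvmf_tgt the same sequence looks roughly like this (options copied from the trace; the malloc size matches the "131072 blocks of 512 bytes (64 MiB)" reported by bdevio below):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192                          # transport options as in the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB RAM bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0         # expose Malloc0 as namespace 1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
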
00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:15.320 16:19:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:15.320 "params": { 00:14:15.320 "name": "Nvme1", 00:14:15.320 "trtype": "tcp", 00:14:15.320 "traddr": "10.0.0.3", 00:14:15.320 "adrfam": "ipv4", 00:14:15.320 "trsvcid": "4420", 00:14:15.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:15.320 "hdgst": false, 00:14:15.320 "ddgst": false 00:14:15.320 }, 00:14:15.320 "method": "bdev_nvme_attach_controller" 00:14:15.320 }' 00:14:15.320 [2024-11-26 16:19:40.927019] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:14:15.320 [2024-11-26 16:19:40.927574] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82790 ] 00:14:15.578 [2024-11-26 16:19:41.083563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:15.578 [2024-11-26 16:19:41.136336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.579 [2024-11-26 16:19:41.136495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.579 [2024-11-26 16:19:41.136497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.579 [2024-11-26 16:19:41.150489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.838 I/O targets: 00:14:15.838 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:15.838 00:14:15.838 00:14:15.838 CUnit - A unit testing framework for C - Version 2.1-3 00:14:15.838 http://cunit.sourceforge.net/ 00:14:15.838 00:14:15.838 00:14:15.838 Suite: bdevio tests on: Nvme1n1 00:14:15.838 Test: blockdev write read block ...passed 00:14:15.838 Test: blockdev write zeroes read block ...passed 00:14:15.838 Test: blockdev write zeroes read no split ...passed 00:14:15.838 Test: blockdev write zeroes read split ...passed 00:14:15.838 Test: blockdev write zeroes read split partial ...passed 00:14:15.838 Test: blockdev reset ...[2024-11-26 16:19:41.379935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:15.838 [2024-11-26 16:19:41.380060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1489f00 (9): Bad file descriptor 00:14:15.838 [2024-11-26 16:19:41.398478] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
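
The resolved JSON printed above is what gen_nvmf_target_json hands to bdevio via --json /dev/fd/62: a single bdev_nvme_attach_controller call that connects a TCP initiator to nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420 and exposes it as bdev Nvme1n1, which the CUnit suite below then exercises. Against a long-running SPDK application the equivalent attachment could be made over RPC, roughly as follows (flag spellings may vary slightly between SPDK versions; digests stay disabled, as in the JSON):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
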
00:14:15.838 passed 00:14:15.838 Test: blockdev write read 8 blocks ...passed 00:14:15.838 Test: blockdev write read size > 128k ...passed 00:14:15.838 Test: blockdev write read invalid size ...passed 00:14:15.838 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:15.838 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:15.838 Test: blockdev write read max offset ...passed 00:14:15.838 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:15.838 Test: blockdev writev readv 8 blocks ...passed 00:14:15.838 Test: blockdev writev readv 30 x 1block ...passed 00:14:15.838 Test: blockdev writev readv block ...passed 00:14:15.838 Test: blockdev writev readv size > 128k ...passed 00:14:15.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:15.838 Test: blockdev comparev and writev ...[2024-11-26 16:19:41.408888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.838 [2024-11-26 16:19:41.409109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:15.838 [2024-11-26 16:19:41.409286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.838 [2024-11-26 16:19:41.409420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:15.838 [2024-11-26 16:19:41.409774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.838 [2024-11-26 16:19:41.409808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:15.838 [2024-11-26 16:19:41.409831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.838 [2024-11-26 16:19:41.409845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:15.838 [2024-11-26 16:19:41.410133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.838 [2024-11-26 16:19:41.410160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:15.838 [2024-11-26 16:19:41.410181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.838 [2024-11-26 16:19:41.410194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:15.838 passed 00:14:15.838 Test: blockdev nvme passthru rw ...[2024-11-26 16:19:41.410673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.838 [2024-11-26 16:19:41.410699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:15.838 [2024-11-26 16:19:41.410721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:15.838 [2024-11-26 16:19:41.410733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:15.838 passed 00:14:15.838 Test: blockdev nvme passthru vendor specific ...[2024-11-26 16:19:41.411621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:15.838 [2024-11-26 16:19:41.411651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:15.838 passed 00:14:15.838 Test: blockdev nvme admin passthru ...[2024-11-26 16:19:41.411775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:15.838 [2024-11-26 16:19:41.411801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:15.838 [2024-11-26 16:19:41.411919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:15.838 [2024-11-26 16:19:41.411938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:15.838 [2024-11-26 16:19:41.412051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:15.838 [2024-11-26 16:19:41.412079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:15.838 passed 00:14:15.838 Test: blockdev copy ...passed 00:14:15.838 00:14:15.838 Run Summary: Type Total Ran Passed Failed Inactive 00:14:15.838 suites 1 1 n/a 0 0 00:14:15.838 tests 23 23 23 0 0 00:14:15.838 asserts 152 152 152 0 n/a 00:14:15.838 00:14:15.838 Elapsed time = 0.177 seconds 00:14:16.097 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.097 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.097 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:16.097 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.097 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:16.097 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:16.097 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:16.097 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:16.356 rmmod nvme_tcp 00:14:16.356 rmmod nvme_fabrics 00:14:16.356 rmmod nvme_keyring 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82757 ']' 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82757 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82757 ']' 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82757 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82757 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82757' 00:14:16.356 killing process with pid 82757 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82757 00:14:16.356 16:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82757 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:16.615 16:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:16.615 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:16.874 00:14:16.874 real 0m2.680s 00:14:16.874 user 0m7.229s 00:14:16.874 sys 0m1.259s 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.874 ************************************ 00:14:16.874 END TEST nvmf_bdevio_no_huge 00:14:16.874 ************************************ 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.874 ************************************ 00:14:16.874 START TEST nvmf_tls 00:14:16.874 ************************************ 00:14:16.874 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:17.133 * Looking for test storage... 
00:14:17.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:17.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.133 --rc genhtml_branch_coverage=1 00:14:17.133 --rc genhtml_function_coverage=1 00:14:17.133 --rc genhtml_legend=1 00:14:17.133 --rc geninfo_all_blocks=1 00:14:17.133 --rc geninfo_unexecuted_blocks=1 00:14:17.133 00:14:17.133 ' 00:14:17.133 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:17.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.133 --rc genhtml_branch_coverage=1 00:14:17.133 --rc genhtml_function_coverage=1 00:14:17.133 --rc genhtml_legend=1 00:14:17.133 --rc geninfo_all_blocks=1 00:14:17.133 --rc geninfo_unexecuted_blocks=1 00:14:17.133 00:14:17.133 ' 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.134 --rc genhtml_branch_coverage=1 00:14:17.134 --rc genhtml_function_coverage=1 00:14:17.134 --rc genhtml_legend=1 00:14:17.134 --rc geninfo_all_blocks=1 00:14:17.134 --rc geninfo_unexecuted_blocks=1 00:14:17.134 00:14:17.134 ' 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:17.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.134 --rc genhtml_branch_coverage=1 00:14:17.134 --rc genhtml_function_coverage=1 00:14:17.134 --rc genhtml_legend=1 00:14:17.134 --rc geninfo_all_blocks=1 00:14:17.134 --rc geninfo_unexecuted_blocks=1 00:14:17.134 00:14:17.134 ' 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.134 16:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.134 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:17.134 
16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:17.134 Cannot find device "nvmf_init_br" 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:17.134 Cannot find device "nvmf_init_br2" 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:17.134 Cannot find device "nvmf_tgt_br" 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.134 Cannot find device "nvmf_tgt_br2" 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:17.134 Cannot find device "nvmf_init_br" 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:17.134 Cannot find device "nvmf_init_br2" 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:17.134 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:17.134 Cannot find device "nvmf_tgt_br" 00:14:17.135 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:17.135 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:17.135 Cannot find device "nvmf_tgt_br2" 00:14:17.135 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:17.135 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:17.135 Cannot find device "nvmf_br" 00:14:17.135 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:17.135 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:17.135 Cannot find device "nvmf_init_if" 00:14:17.135 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:17.135 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:17.394 Cannot find device "nvmf_init_if2" 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.394 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:17.394 16:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:17.395 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:17.395 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:17.395 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:17.395 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:17.667 16:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:17.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:17.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:14:17.667 00:14:17.667 --- 10.0.0.3 ping statistics --- 00:14:17.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.667 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:17.667 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:17.667 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:14:17.667 00:14:17.667 --- 10.0.0.4 ping statistics --- 00:14:17.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.667 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:17.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:17.667 00:14:17.667 --- 10.0.0.1 ping statistics --- 00:14:17.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.667 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:17.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:17.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:14:17.667 00:14:17.667 --- 10.0.0.2 ping statistics --- 00:14:17.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.667 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83021 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83021 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83021 ']' 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.667 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.667 [2024-11-26 16:19:43.159822] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
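
For the TLS suite the target has just been relaunched with -m 0x2 and --wait-for-rpc, which keeps the framework in its RPC-configuration state until the ssl socket implementation has been selected and tuned; only then does tls.sh issue framework_start_init, as the trace below shows. Stripped of the option round-trips, the configuration phase is a short RPC sequence (command names as they appear in tls.sh):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  $RPC sock_set_default_impl -i ssl                          # use the TLS-capable ssl socket implementation
  $RPC sock_impl_set_options -i ssl --tls-version 13         # require TLS 1.3
  $RPC sock_impl_get_options -i ssl | jq -r .tls_version     # sanity check, expected output: 13
  $RPC framework_start_init                                  # leave the --wait-for-rpc state and start subsystems
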
00:14:17.667 [2024-11-26 16:19:43.159932] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.936 [2024-11-26 16:19:43.315570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.936 [2024-11-26 16:19:43.339569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.936 [2024-11-26 16:19:43.339637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.936 [2024-11-26 16:19:43.339662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.936 [2024-11-26 16:19:43.339681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.936 [2024-11-26 16:19:43.339690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.936 [2024-11-26 16:19:43.340037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.936 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.936 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:17.936 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.936 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.936 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.936 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.936 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:17.936 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:18.196 true 00:14:18.196 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:18.196 16:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:18.455 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:18.455 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:18.455 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:18.714 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:18.714 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:18.972 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:18.972 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:18.972 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:19.231 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:19.231 16:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:19.800 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:19.800 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:19.800 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:19.800 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:19.800 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:19.800 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:19.800 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:20.367 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:20.367 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:20.367 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:20.367 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:20.367 16:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:20.627 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:20.627 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:20.887 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.dp17HvKQ1X 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.uKqeJnuq6N 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dp17HvKQ1X 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.uKqeJnuq6N 00:14:21.147 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:21.406 16:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:21.666 [2024-11-26 16:19:47.152671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.666 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.dp17HvKQ1X 00:14:21.666 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dp17HvKQ1X 00:14:21.666 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:21.925 [2024-11-26 16:19:47.394485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.925 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:22.184 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:22.443 [2024-11-26 16:19:47.918617] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:22.443 [2024-11-26 16:19:47.918903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:22.443 16:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:22.703 malloc0 00:14:22.703 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:22.962 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dp17HvKQ1X 00:14:23.221 16:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:23.480 16:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.dp17HvKQ1X 00:14:33.455 Initializing NVMe Controllers 00:14:33.455 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.455 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.455 Initialization complete. Launching workers. 00:14:33.455 ======================================================== 00:14:33.455 Latency(us) 00:14:33.456 Device Information : IOPS MiB/s Average min max 00:14:33.456 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9336.38 36.47 6856.13 1716.59 15220.38 00:14:33.456 ======================================================== 00:14:33.456 Total : 9336.38 36.47 6856.13 1716.59 15220.38 00:14:33.456 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dp17HvKQ1X 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dp17HvKQ1X 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83247 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83247 /var/tmp/bdevperf.sock 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83247 ']' 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
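The setup traced above reduces to a short, repeatable sequence: derive a PSK in NVMe TLS interchange format, register it in the target keyring, and bind it to a host on a TLS-enabled listener. The following is a minimal standalone sketch; every RPC call and argument is copied from the traced commands, while the key-encoding step is an assumption about what the embedded python helper computes (base64 of the raw key bytes with a little-endian CRC-32 appended; hash indicator 01 for SHA-256-derived PSKs, 02 for SHA-384, matching the "digest" argument seen above).

#!/usr/bin/env bash
# Sketch only: derive a TLS PSK in interchange format and configure a
# TLS-enabled NVMe/TCP target, condensing the setup_nvmf_tgt steps traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Assumed encoding: NVMeTLSkey-1:<hash>:base64(raw_key || CRC-32(raw_key)):
key=$(python3 <<'EOF'
import base64, zlib
raw = b"00112233445566778899aabbccddeeff"    # configured PSK from the log
crc = zlib.crc32(raw).to_bytes(4, "little")  # assumption: little-endian CRC-32
print("NVMeTLSkey-1:01:" + base64.b64encode(raw + crc).decode() + ":")
EOF
)
key_path=$(mktemp)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"

# Target side: ssl socket impl, TCP transport, subsystem, TLS listener (-k),
# namespace, registered key, and the host that is allowed to use it.
"$rpc" sock_set_default_impl -i ssl
"$rpc" sock_impl_set_options -i ssl --tls-version 13
"$rpc" framework_start_init
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc" keyring_file_add_key key0 "$key_path"
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

If the encoding assumption holds, the generated string corresponds to the NVMeTLSkey-1:01:...: values echoed into the 0600-permission temp files above.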
00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.456 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.713 [2024-11-26 16:19:59.140763] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:14:33.713 [2024-11-26 16:19:59.140898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83247 ] 00:14:33.713 [2024-11-26 16:19:59.285278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.713 [2024-11-26 16:19:59.305350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.713 [2024-11-26 16:19:59.334800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:33.970 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.970 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:33.971 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dp17HvKQ1X 00:14:34.231 16:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:34.499 [2024-11-26 16:20:00.034891] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:34.499 TLSTESTn1 00:14:34.499 16:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:34.767 Running I/O for 10 seconds... 
00:14:36.639 4204.00 IOPS, 16.42 MiB/s [2024-11-26T16:20:03.670Z] 4225.00 IOPS, 16.50 MiB/s [2024-11-26T16:20:04.606Z] 4262.00 IOPS, 16.65 MiB/s [2024-11-26T16:20:05.542Z] 4258.00 IOPS, 16.63 MiB/s [2024-11-26T16:20:06.477Z] 4261.60 IOPS, 16.65 MiB/s [2024-11-26T16:20:07.412Z] 4117.17 IOPS, 16.08 MiB/s [2024-11-26T16:20:08.346Z] 4129.71 IOPS, 16.13 MiB/s [2024-11-26T16:20:09.280Z] 4151.62 IOPS, 16.22 MiB/s [2024-11-26T16:20:10.657Z] 4152.33 IOPS, 16.22 MiB/s [2024-11-26T16:20:10.657Z] 4168.80 IOPS, 16.28 MiB/s 00:14:45.004 Latency(us) 00:14:45.004 [2024-11-26T16:20:10.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.004 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:45.004 Verification LBA range: start 0x0 length 0x2000 00:14:45.004 TLSTESTn1 : 10.02 4175.08 16.31 0.00 0.00 30604.46 5689.72 25499.46 00:14:45.004 [2024-11-26T16:20:10.657Z] =================================================================================================================== 00:14:45.004 [2024-11-26T16:20:10.657Z] Total : 4175.08 16.31 0.00 0.00 30604.46 5689.72 25499.46 00:14:45.004 { 00:14:45.004 "results": [ 00:14:45.004 { 00:14:45.004 "job": "TLSTESTn1", 00:14:45.004 "core_mask": "0x4", 00:14:45.004 "workload": "verify", 00:14:45.004 "status": "finished", 00:14:45.004 "verify_range": { 00:14:45.004 "start": 0, 00:14:45.004 "length": 8192 00:14:45.004 }, 00:14:45.004 "queue_depth": 128, 00:14:45.004 "io_size": 4096, 00:14:45.004 "runtime": 10.015145, 00:14:45.004 "iops": 4175.076846116556, 00:14:45.004 "mibps": 16.308893930142798, 00:14:45.004 "io_failed": 0, 00:14:45.004 "io_timeout": 0, 00:14:45.004 "avg_latency_us": 30604.462012809974, 00:14:45.004 "min_latency_us": 5689.716363636364, 00:14:45.004 "max_latency_us": 25499.46181818182 00:14:45.004 } 00:14:45.004 ], 00:14:45.004 "core_count": 1 00:14:45.004 } 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83247 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83247 ']' 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83247 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83247 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83247' 00:14:45.004 killing process with pid 83247 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83247 00:14:45.004 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.004 00:14:45.004 Latency(us) 00:14:45.004 [2024-11-26T16:20:10.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.004 [2024-11-26T16:20:10.657Z] 
=================================================================================================================== 00:14:45.004 [2024-11-26T16:20:10.657Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83247 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uKqeJnuq6N 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uKqeJnuq6N 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uKqeJnuq6N 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:45.004 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uKqeJnuq6N 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83380 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83380 /var/tmp/bdevperf.sock 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83380 ']' 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.005 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.005 [2024-11-26 16:20:10.521188] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:14:45.005 [2024-11-26 16:20:10.521287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83380 ] 00:14:45.266 [2024-11-26 16:20:10.659689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.266 [2024-11-26 16:20:10.680306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.266 [2024-11-26 16:20:10.709689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.266 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.266 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:45.266 16:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uKqeJnuq6N 00:14:45.526 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:45.785 [2024-11-26 16:20:11.309563] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.785 [2024-11-26 16:20:11.314518] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:45.785 [2024-11-26 16:20:11.315154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b3a30 (107): Transport endpoint is not connected 00:14:45.785 [2024-11-26 16:20:11.316141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b3a30 (9): Bad file descriptor 00:14:45.785 [2024-11-26 16:20:11.317152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:45.785 [2024-11-26 16:20:11.317172] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:45.785 [2024-11-26 16:20:11.317197] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:45.785 [2024-11-26 16:20:11.317210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:45.785 request: 00:14:45.785 { 00:14:45.785 "name": "TLSTEST", 00:14:45.785 "trtype": "tcp", 00:14:45.785 "traddr": "10.0.0.3", 00:14:45.785 "adrfam": "ipv4", 00:14:45.785 "trsvcid": "4420", 00:14:45.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.785 "prchk_reftag": false, 00:14:45.785 "prchk_guard": false, 00:14:45.785 "hdgst": false, 00:14:45.785 "ddgst": false, 00:14:45.785 "psk": "key0", 00:14:45.785 "allow_unrecognized_csi": false, 00:14:45.785 "method": "bdev_nvme_attach_controller", 00:14:45.785 "req_id": 1 00:14:45.785 } 00:14:45.785 Got JSON-RPC error response 00:14:45.785 response: 00:14:45.785 { 00:14:45.785 "code": -5, 00:14:45.785 "message": "Input/output error" 00:14:45.785 } 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83380 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83380 ']' 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83380 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83380 00:14:45.785 killing process with pid 83380 00:14:45.785 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.785 00:14:45.785 Latency(us) 00:14:45.785 [2024-11-26T16:20:11.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.785 [2024-11-26T16:20:11.438Z] =================================================================================================================== 00:14:45.785 [2024-11-26T16:20:11.438Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83380' 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83380 00:14:45.785 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83380 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dp17HvKQ1X 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dp17HvKQ1X 
00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:46.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dp17HvKQ1X 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dp17HvKQ1X 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83401 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83401 /var/tmp/bdevperf.sock 00:14:46.044 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83401 ']' 00:14:46.045 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.045 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.045 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.045 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.045 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.045 [2024-11-26 16:20:11.559081] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:14:46.045 [2024-11-26 16:20:11.559212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83401 ] 00:14:46.303 [2024-11-26 16:20:11.702728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.304 [2024-11-26 16:20:11.723623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.304 [2024-11-26 16:20:11.753473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.304 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.304 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:46.304 16:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dp17HvKQ1X 00:14:46.562 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:46.822 [2024-11-26 16:20:12.309675] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.822 [2024-11-26 16:20:12.314633] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:46.822 [2024-11-26 16:20:12.314688] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:46.822 [2024-11-26 16:20:12.314766] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:46.822 [2024-11-26 16:20:12.315395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8a30 (107): Transport endpoint is not connected 00:14:46.822 [2024-11-26 16:20:12.316384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8a30 (9): Bad file descriptor 00:14:46.822 [2024-11-26 16:20:12.317380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:46.822 [2024-11-26 16:20:12.317423] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:46.822 [2024-11-26 16:20:12.317434] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:46.822 [2024-11-26 16:20:12.317448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:46.822 request: 00:14:46.822 { 00:14:46.822 "name": "TLSTEST", 00:14:46.822 "trtype": "tcp", 00:14:46.822 "traddr": "10.0.0.3", 00:14:46.822 "adrfam": "ipv4", 00:14:46.822 "trsvcid": "4420", 00:14:46.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.822 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:46.822 "prchk_reftag": false, 00:14:46.822 "prchk_guard": false, 00:14:46.822 "hdgst": false, 00:14:46.822 "ddgst": false, 00:14:46.822 "psk": "key0", 00:14:46.822 "allow_unrecognized_csi": false, 00:14:46.822 "method": "bdev_nvme_attach_controller", 00:14:46.822 "req_id": 1 00:14:46.822 } 00:14:46.822 Got JSON-RPC error response 00:14:46.822 response: 00:14:46.822 { 00:14:46.822 "code": -5, 00:14:46.822 "message": "Input/output error" 00:14:46.822 } 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83401 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83401 ']' 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83401 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83401 00:14:46.822 killing process with pid 83401 00:14:46.822 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.822 00:14:46.822 Latency(us) 00:14:46.822 [2024-11-26T16:20:12.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.822 [2024-11-26T16:20:12.475Z] =================================================================================================================== 00:14:46.822 [2024-11-26T16:20:12.475Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83401' 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83401 00:14:46.822 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83401 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dp17HvKQ1X 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dp17HvKQ1X 
00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dp17HvKQ1X 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dp17HvKQ1X 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83422 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83422 /var/tmp/bdevperf.sock 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83422 ']' 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.082 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.082 [2024-11-26 16:20:12.552524] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:14:47.082 [2024-11-26 16:20:12.552644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83422 ] 00:14:47.082 [2024-11-26 16:20:12.697224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.082 [2024-11-26 16:20:12.719633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.341 [2024-11-26 16:20:12.751335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:47.341 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.341 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:47.341 16:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dp17HvKQ1X 00:14:47.599 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:47.857 [2024-11-26 16:20:13.432451] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.858 [2024-11-26 16:20:13.442643] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:47.858 [2024-11-26 16:20:13.442689] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:47.858 [2024-11-26 16:20:13.442742] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:47.858 [2024-11-26 16:20:13.443268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2309a30 (107): Transport endpoint is not connected 00:14:47.858 [2024-11-26 16:20:13.444259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2309a30 (9): Bad file descriptor 00:14:47.858 [2024-11-26 16:20:13.445255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:47.858 [2024-11-26 16:20:13.445296] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:47.858 [2024-11-26 16:20:13.445319] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:47.858 [2024-11-26 16:20:13.445334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:47.858 request: 00:14:47.858 { 00:14:47.858 "name": "TLSTEST", 00:14:47.858 "trtype": "tcp", 00:14:47.858 "traddr": "10.0.0.3", 00:14:47.858 "adrfam": "ipv4", 00:14:47.858 "trsvcid": "4420", 00:14:47.858 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:47.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:47.858 "prchk_reftag": false, 00:14:47.858 "prchk_guard": false, 00:14:47.858 "hdgst": false, 00:14:47.858 "ddgst": false, 00:14:47.858 "psk": "key0", 00:14:47.858 "allow_unrecognized_csi": false, 00:14:47.858 "method": "bdev_nvme_attach_controller", 00:14:47.858 "req_id": 1 00:14:47.858 } 00:14:47.858 Got JSON-RPC error response 00:14:47.858 response: 00:14:47.858 { 00:14:47.858 "code": -5, 00:14:47.858 "message": "Input/output error" 00:14:47.858 } 00:14:47.858 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83422 00:14:47.858 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83422 ']' 00:14:47.858 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83422 00:14:47.858 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:47.858 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.858 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83422 00:14:48.116 killing process with pid 83422 00:14:48.116 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.116 00:14:48.116 Latency(us) 00:14:48.116 [2024-11-26T16:20:13.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.116 [2024-11-26T16:20:13.769Z] =================================================================================================================== 00:14:48.116 [2024-11-26T16:20:13.769Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83422' 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83422 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83422 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:48.116 16:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83443 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83443 /var/tmp/bdevperf.sock 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83443 ']' 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.116 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.116 [2024-11-26 16:20:13.690897] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:14:48.116 [2024-11-26 16:20:13.691013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83443 ] 00:14:48.374 [2024-11-26 16:20:13.840078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.375 [2024-11-26 16:20:13.867527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.375 [2024-11-26 16:20:13.901082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:48.375 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.375 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:48.375 16:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:48.940 [2024-11-26 16:20:14.305223] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:48.940 [2024-11-26 16:20:14.305274] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:48.940 request: 00:14:48.940 { 00:14:48.940 "name": "key0", 00:14:48.940 "path": "", 00:14:48.940 "method": "keyring_file_add_key", 00:14:48.940 "req_id": 1 00:14:48.940 } 00:14:48.940 Got JSON-RPC error response 00:14:48.940 response: 00:14:48.940 { 00:14:48.940 "code": -1, 00:14:48.940 "message": "Operation not permitted" 00:14:48.940 } 00:14:48.940 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:49.206 [2024-11-26 16:20:14.625488] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:49.206 [2024-11-26 16:20:14.625580] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:49.206 request: 00:14:49.206 { 00:14:49.206 "name": "TLSTEST", 00:14:49.206 "trtype": "tcp", 00:14:49.206 "traddr": "10.0.0.3", 00:14:49.206 "adrfam": "ipv4", 00:14:49.206 "trsvcid": "4420", 00:14:49.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.206 "prchk_reftag": false, 00:14:49.206 "prchk_guard": false, 00:14:49.206 "hdgst": false, 00:14:49.206 "ddgst": false, 00:14:49.206 "psk": "key0", 00:14:49.206 "allow_unrecognized_csi": false, 00:14:49.206 "method": "bdev_nvme_attach_controller", 00:14:49.206 "req_id": 1 00:14:49.206 } 00:14:49.206 Got JSON-RPC error response 00:14:49.206 response: 00:14:49.206 { 00:14:49.206 "code": -126, 00:14:49.206 "message": "Required key not available" 00:14:49.206 } 00:14:49.206 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83443 00:14:49.206 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83443 ']' 00:14:49.206 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83443 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.207 16:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83443 00:14:49.207 killing process with pid 83443 00:14:49.207 Received shutdown signal, test time was about 10.000000 seconds 00:14:49.207 00:14:49.207 Latency(us) 00:14:49.207 [2024-11-26T16:20:14.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.207 [2024-11-26T16:20:14.860Z] =================================================================================================================== 00:14:49.207 [2024-11-26T16:20:14.860Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83443' 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83443 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83443 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83021 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83021 ']' 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83021 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83021 00:14:49.207 killing process with pid 83021 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83021' 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83021 00:14:49.207 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83021 00:14:49.476 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:49.476 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:49.476 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:49.476 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:49.476 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:49.476 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:49.476 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:49.476 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:49.476 16:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.dyeRk31OMC 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.dyeRk31OMC 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83474 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83474 00:14:49.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83474 ']' 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.476 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.476 [2024-11-26 16:20:15.070838] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:14:49.476 [2024-11-26 16:20:15.070940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.734 [2024-11-26 16:20:15.214520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.734 [2024-11-26 16:20:15.233099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.734 [2024-11-26 16:20:15.233435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:49.734 [2024-11-26 16:20:15.233471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.734 [2024-11-26 16:20:15.233481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.734 [2024-11-26 16:20:15.233489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.734 [2024-11-26 16:20:15.233806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.734 [2024-11-26 16:20:15.262586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.734 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.734 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:49.734 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.734 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:49.734 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.734 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.734 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.dyeRk31OMC 00:14:49.734 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dyeRk31OMC 00:14:49.734 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:49.991 [2024-11-26 16:20:15.610546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.992 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:50.250 16:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:50.817 [2024-11-26 16:20:16.162669] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:50.817 [2024-11-26 16:20:16.162930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:50.817 16:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:51.075 malloc0 00:14:51.075 16:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:51.334 16:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC 00:14:51.591 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dyeRk31OMC 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
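For reference, the setup_nvmf_tgt call traced above reduces to six rpc.py invocations against the target; collected here as one standalone sketch (paths, the 10.0.0.3:4420 listener, and the NQNs are exactly those used by this run).

  # Sketch: target-side TLS setup as exercised by target/tls.sh@50-59.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.dyeRk31OMC                     # 0600 interchange PSK file from above
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS (secure_channel) listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 "$key"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0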
00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dyeRk31OMC 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83522 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:51.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83522 /var/tmp/bdevperf.sock 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83522 ']' 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.850 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.850 [2024-11-26 16:20:17.375760] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
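The host side mirrors this: bdevperf is started idle, the same key file is registered with bdevperf's own keyring over its RPC socket, the controller is attached with --psk, and perform_tests drives the 10-second verify workload. A consolidated sketch of the commands this run issues:

  # Sketch: initiator-side steps from run_bdevperf, against /var/tmp/bdevperf.sock.
  spdk=/home/vagrant/spdk_repo/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # (the test waits for the RPC socket to come up before issuing calls)
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests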
00:14:51.850 [2024-11-26 16:20:17.376029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83522 ] 00:14:52.109 [2024-11-26 16:20:17.524871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.109 [2024-11-26 16:20:17.546813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.109 [2024-11-26 16:20:17.577851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:52.109 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.109 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:52.109 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC 00:14:52.367 16:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:52.625 [2024-11-26 16:20:18.134495] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.625 TLSTESTn1 00:14:52.625 16:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:52.883 Running I/O for 10 seconds... 00:14:54.748 3339.00 IOPS, 13.04 MiB/s [2024-11-26T16:20:21.774Z] 3355.00 IOPS, 13.11 MiB/s [2024-11-26T16:20:22.706Z] 3471.33 IOPS, 13.56 MiB/s [2024-11-26T16:20:23.639Z] 3639.00 IOPS, 14.21 MiB/s [2024-11-26T16:20:24.573Z] 3787.80 IOPS, 14.80 MiB/s [2024-11-26T16:20:25.506Z] 3894.00 IOPS, 15.21 MiB/s [2024-11-26T16:20:26.439Z] 3968.57 IOPS, 15.50 MiB/s [2024-11-26T16:20:27.372Z] 4023.38 IOPS, 15.72 MiB/s [2024-11-26T16:20:28.745Z] 4065.33 IOPS, 15.88 MiB/s [2024-11-26T16:20:28.745Z] 4104.20 IOPS, 16.03 MiB/s 00:15:03.092 Latency(us) 00:15:03.092 [2024-11-26T16:20:28.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.092 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:03.092 Verification LBA range: start 0x0 length 0x2000 00:15:03.092 TLSTESTn1 : 10.02 4109.36 16.05 0.00 0.00 31091.10 6553.60 31218.97 00:15:03.092 [2024-11-26T16:20:28.745Z] =================================================================================================================== 00:15:03.092 [2024-11-26T16:20:28.745Z] Total : 4109.36 16.05 0.00 0.00 31091.10 6553.60 31218.97 00:15:03.092 { 00:15:03.092 "results": [ 00:15:03.092 { 00:15:03.092 "job": "TLSTESTn1", 00:15:03.092 "core_mask": "0x4", 00:15:03.092 "workload": "verify", 00:15:03.092 "status": "finished", 00:15:03.092 "verify_range": { 00:15:03.092 "start": 0, 00:15:03.092 "length": 8192 00:15:03.092 }, 00:15:03.092 "queue_depth": 128, 00:15:03.092 "io_size": 4096, 00:15:03.092 "runtime": 10.01836, 00:15:03.092 "iops": 4109.355223809087, 00:15:03.092 "mibps": 16.052168843004246, 00:15:03.092 "io_failed": 0, 00:15:03.092 "io_timeout": 0, 00:15:03.092 "avg_latency_us": 31091.101169061454, 00:15:03.092 "min_latency_us": 6553.6, 00:15:03.092 "max_latency_us": 
31218.967272727274 00:15:03.092 } 00:15:03.092 ], 00:15:03.092 "core_count": 1 00:15:03.092 } 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83522 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83522 ']' 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83522 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83522 00:15:03.092 killing process with pid 83522 00:15:03.092 Received shutdown signal, test time was about 10.000000 seconds 00:15:03.092 00:15:03.092 Latency(us) 00:15:03.092 [2024-11-26T16:20:28.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.092 [2024-11-26T16:20:28.745Z] =================================================================================================================== 00:15:03.092 [2024-11-26T16:20:28.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83522' 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83522 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83522 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.dyeRk31OMC 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dyeRk31OMC 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dyeRk31OMC 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dyeRk31OMC 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 
-- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dyeRk31OMC 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83650 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83650 /var/tmp/bdevperf.sock 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83650 ']' 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.092 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.093 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:03.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:03.093 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.093 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.093 [2024-11-26 16:20:28.607397] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
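With the key file made world-readable by the chmod 0666 above, the test now expects the whole attach path to fail; NOT(run_bdevperf ...) passes only when the run returns non-zero. The first failing step can be reproduced by hand as a sketch (the keyring errors it produces appear in the log just below):

  # Sketch: a world-readable key file should be rejected by keyring_file_add_key.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  chmod 0666 /tmp/tmp.dyeRk31OMC
  if $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC; then
      echo "unexpected: world-readable key file was accepted" >&2
      exit 1
  fi
  chmod 0600 /tmp/tmp.dyeRk31OMC     # restore the mode the keyring accepts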
00:15:03.093 [2024-11-26 16:20:28.607680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83650 ] 00:15:03.351 [2024-11-26 16:20:28.750928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.351 [2024-11-26 16:20:28.770494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.351 [2024-11-26 16:20:28.799665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.351 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.351 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:03.351 16:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC 00:15:03.609 [2024-11-26 16:20:29.108286] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dyeRk31OMC': 0100666 00:15:03.609 [2024-11-26 16:20:29.108537] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:03.609 request: 00:15:03.609 { 00:15:03.609 "name": "key0", 00:15:03.609 "path": "/tmp/tmp.dyeRk31OMC", 00:15:03.609 "method": "keyring_file_add_key", 00:15:03.609 "req_id": 1 00:15:03.609 } 00:15:03.609 Got JSON-RPC error response 00:15:03.609 response: 00:15:03.609 { 00:15:03.609 "code": -1, 00:15:03.609 "message": "Operation not permitted" 00:15:03.609 } 00:15:03.609 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:03.867 [2024-11-26 16:20:29.392457] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:03.867 [2024-11-26 16:20:29.392532] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:03.867 request: 00:15:03.867 { 00:15:03.867 "name": "TLSTEST", 00:15:03.867 "trtype": "tcp", 00:15:03.867 "traddr": "10.0.0.3", 00:15:03.867 "adrfam": "ipv4", 00:15:03.867 "trsvcid": "4420", 00:15:03.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.867 "prchk_reftag": false, 00:15:03.867 "prchk_guard": false, 00:15:03.867 "hdgst": false, 00:15:03.867 "ddgst": false, 00:15:03.867 "psk": "key0", 00:15:03.867 "allow_unrecognized_csi": false, 00:15:03.867 "method": "bdev_nvme_attach_controller", 00:15:03.867 "req_id": 1 00:15:03.867 } 00:15:03.867 Got JSON-RPC error response 00:15:03.867 response: 00:15:03.867 { 00:15:03.867 "code": -126, 00:15:03.867 "message": "Required key not available" 00:15:03.867 } 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83650 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83650 ']' 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83650 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83650 00:15:03.867 killing process with pid 83650 00:15:03.867 Received shutdown signal, test time was about 10.000000 seconds 00:15:03.867 00:15:03.867 Latency(us) 00:15:03.867 [2024-11-26T16:20:29.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.867 [2024-11-26T16:20:29.520Z] =================================================================================================================== 00:15:03.867 [2024-11-26T16:20:29.520Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83650' 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83650 00:15:03.867 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83650 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83474 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83474 ']' 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83474 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83474 00:15:04.126 killing process with pid 83474 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83474' 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83474 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83474 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83676 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83676 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83676 ']' 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.126 16:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.126 [2024-11-26 16:20:29.771209] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:04.126 [2024-11-26 16:20:29.771325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.384 [2024-11-26 16:20:29.913648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.384 [2024-11-26 16:20:29.931962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.384 [2024-11-26 16:20:29.932018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.384 [2024-11-26 16:20:29.932045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.384 [2024-11-26 16:20:29.932052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.384 [2024-11-26 16:20:29.932059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
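The same world-readable key is then fed to setup_nvmf_tgt on the fresh target started above (pid 83676): keyring_file_add_key fails again, so key0 never exists and nvmf_subsystem_add_host comes back with -32603 "Internal error", as the traces below show. The gate is effectively the file mode; an illustrative sketch (not the SPDK source) of the check being exercised:

  # Sketch (illustrative only): keyring_file appears to reject key files that grant
  # any group/other access, which is why 0600 passes and 0666 fails.
  key=/tmp/tmp.dyeRk31OMC
  mode=$(stat -c '%a' "$key")
  case "$mode" in
      600|400) echo "ok: $key mode $mode" ;;
      *)       echo "rejected: $key mode $mode has group/other bits set" >&2; exit 1 ;;
  esac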
00:15:04.384 [2024-11-26 16:20:29.932316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.384 [2024-11-26 16:20:29.960431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.384 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.384 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:04.384 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:04.384 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:04.384 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.642 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.642 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.dyeRk31OMC 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.dyeRk31OMC 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.dyeRk31OMC 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dyeRk31OMC 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:04.643 [2024-11-26 16:20:30.268152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.643 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:05.209 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:05.209 [2024-11-26 16:20:30.828201] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:05.209 [2024-11-26 16:20:30.828472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:05.209 16:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:05.467 malloc0 00:15:05.726 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:05.985 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC 00:15:05.985 
[2024-11-26 16:20:31.614731] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dyeRk31OMC': 0100666 00:15:05.985 [2024-11-26 16:20:31.614783] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:05.985 request: 00:15:05.985 { 00:15:05.985 "name": "key0", 00:15:05.985 "path": "/tmp/tmp.dyeRk31OMC", 00:15:05.985 "method": "keyring_file_add_key", 00:15:05.985 "req_id": 1 00:15:05.985 } 00:15:05.985 Got JSON-RPC error response 00:15:05.985 response: 00:15:05.985 { 00:15:05.985 "code": -1, 00:15:05.985 "message": "Operation not permitted" 00:15:05.985 } 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:06.244 [2024-11-26 16:20:31.858804] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:06.244 [2024-11-26 16:20:31.859114] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:06.244 request: 00:15:06.244 { 00:15:06.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.244 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.244 "psk": "key0", 00:15:06.244 "method": "nvmf_subsystem_add_host", 00:15:06.244 "req_id": 1 00:15:06.244 } 00:15:06.244 Got JSON-RPC error response 00:15:06.244 response: 00:15:06.244 { 00:15:06.244 "code": -32603, 00:15:06.244 "message": "Internal error" 00:15:06.244 } 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83676 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83676 ']' 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83676 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.244 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83676 00:15:06.504 killing process with pid 83676 00:15:06.504 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:06.504 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:06.504 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83676' 00:15:06.504 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83676 00:15:06.504 16:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83676 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.dyeRk31OMC 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83738 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83738 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83738 ']' 00:15:06.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.504 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.504 [2024-11-26 16:20:32.105280] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:06.504 [2024-11-26 16:20:32.105610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.763 [2024-11-26 16:20:32.253134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.763 [2024-11-26 16:20:32.274605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.763 [2024-11-26 16:20:32.274685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.763 [2024-11-26 16:20:32.274712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.763 [2024-11-26 16:20:32.274720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.763 [2024-11-26 16:20:32.274744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.763 [2024-11-26 16:20:32.275036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.763 [2024-11-26 16:20:32.306358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:06.763 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.763 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:06.763 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:06.763 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:06.763 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.763 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.763 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.dyeRk31OMC 00:15:06.763 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dyeRk31OMC 00:15:06.763 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:07.345 [2024-11-26 16:20:32.707642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.345 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:07.617 16:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:07.617 [2024-11-26 16:20:33.219787] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:07.617 [2024-11-26 16:20:33.220022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.617 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:08.184 malloc0 00:15:08.184 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:08.442 16:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC 00:15:08.700 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:08.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
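With permissions restored to 0600 by the chmod above, the full setup is repeated and, once the bdevperf instance being started here attaches TLSTESTn1, both applications are asked to dump their running configuration with save_config; those JSON blobs make up most of the log below. A sketch of capturing and spot-checking the TLS-relevant pieces (jq availability and the output file names are assumptions):

  # Sketch: snapshot both configs, as target/tls.sh@198-199 does, and inspect them.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config                            > /tmp/tgtconf.json
  $rpc -s /var/tmp/bdevperf.sock save_config  > /tmp/bdevperfconf.json
  jq '.subsystems[] | select(.subsystem=="keyring")' /tmp/tgtconf.json
  jq '.subsystems[] | select(.subsystem=="nvmf").config[]
      | select(.method=="nvmf_subsystem_add_listener").params.secure_channel' /tmp/tgtconf.json
  jq '.subsystems[] | select(.subsystem=="bdev").config[]
      | select(.method=="bdev_nvme_attach_controller").params.psk' /tmp/bdevperfconf.json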
00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83786 00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83786 /var/tmp/bdevperf.sock 00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83786 ']' 00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.959 16:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.959 [2024-11-26 16:20:34.528575] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:08.959 [2024-11-26 16:20:34.528948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83786 ] 00:15:09.217 [2024-11-26 16:20:34.686657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.217 [2024-11-26 16:20:34.710518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.217 [2024-11-26 16:20:34.743501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:10.151 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.151 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:10.151 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC 00:15:10.410 16:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:10.668 [2024-11-26 16:20:36.141241] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.668 TLSTESTn1 00:15:10.668 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:11.236 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:11.236 "subsystems": [ 00:15:11.236 { 00:15:11.236 "subsystem": "keyring", 00:15:11.236 "config": [ 00:15:11.236 { 00:15:11.236 "method": "keyring_file_add_key", 00:15:11.236 "params": { 00:15:11.236 "name": "key0", 00:15:11.236 "path": "/tmp/tmp.dyeRk31OMC" 00:15:11.236 } 00:15:11.236 } 00:15:11.236 ] 00:15:11.236 }, 
00:15:11.236 { 00:15:11.236 "subsystem": "iobuf", 00:15:11.236 "config": [ 00:15:11.236 { 00:15:11.236 "method": "iobuf_set_options", 00:15:11.236 "params": { 00:15:11.236 "small_pool_count": 8192, 00:15:11.236 "large_pool_count": 1024, 00:15:11.236 "small_bufsize": 8192, 00:15:11.236 "large_bufsize": 135168, 00:15:11.236 "enable_numa": false 00:15:11.236 } 00:15:11.236 } 00:15:11.236 ] 00:15:11.236 }, 00:15:11.236 { 00:15:11.236 "subsystem": "sock", 00:15:11.236 "config": [ 00:15:11.236 { 00:15:11.236 "method": "sock_set_default_impl", 00:15:11.236 "params": { 00:15:11.236 "impl_name": "uring" 00:15:11.236 } 00:15:11.236 }, 00:15:11.236 { 00:15:11.236 "method": "sock_impl_set_options", 00:15:11.236 "params": { 00:15:11.236 "impl_name": "ssl", 00:15:11.236 "recv_buf_size": 4096, 00:15:11.236 "send_buf_size": 4096, 00:15:11.236 "enable_recv_pipe": true, 00:15:11.236 "enable_quickack": false, 00:15:11.236 "enable_placement_id": 0, 00:15:11.236 "enable_zerocopy_send_server": true, 00:15:11.236 "enable_zerocopy_send_client": false, 00:15:11.236 "zerocopy_threshold": 0, 00:15:11.236 "tls_version": 0, 00:15:11.236 "enable_ktls": false 00:15:11.236 } 00:15:11.236 }, 00:15:11.236 { 00:15:11.236 "method": "sock_impl_set_options", 00:15:11.236 "params": { 00:15:11.236 "impl_name": "posix", 00:15:11.236 "recv_buf_size": 2097152, 00:15:11.236 "send_buf_size": 2097152, 00:15:11.236 "enable_recv_pipe": true, 00:15:11.236 "enable_quickack": false, 00:15:11.236 "enable_placement_id": 0, 00:15:11.236 "enable_zerocopy_send_server": true, 00:15:11.236 "enable_zerocopy_send_client": false, 00:15:11.236 "zerocopy_threshold": 0, 00:15:11.236 "tls_version": 0, 00:15:11.236 "enable_ktls": false 00:15:11.236 } 00:15:11.236 }, 00:15:11.236 { 00:15:11.236 "method": "sock_impl_set_options", 00:15:11.236 "params": { 00:15:11.236 "impl_name": "uring", 00:15:11.236 "recv_buf_size": 2097152, 00:15:11.236 "send_buf_size": 2097152, 00:15:11.236 "enable_recv_pipe": true, 00:15:11.236 "enable_quickack": false, 00:15:11.236 "enable_placement_id": 0, 00:15:11.236 "enable_zerocopy_send_server": false, 00:15:11.236 "enable_zerocopy_send_client": false, 00:15:11.236 "zerocopy_threshold": 0, 00:15:11.236 "tls_version": 0, 00:15:11.236 "enable_ktls": false 00:15:11.236 } 00:15:11.236 } 00:15:11.236 ] 00:15:11.236 }, 00:15:11.236 { 00:15:11.236 "subsystem": "vmd", 00:15:11.236 "config": [] 00:15:11.236 }, 00:15:11.236 { 00:15:11.236 "subsystem": "accel", 00:15:11.236 "config": [ 00:15:11.236 { 00:15:11.236 "method": "accel_set_options", 00:15:11.236 "params": { 00:15:11.236 "small_cache_size": 128, 00:15:11.236 "large_cache_size": 16, 00:15:11.236 "task_count": 2048, 00:15:11.236 "sequence_count": 2048, 00:15:11.236 "buf_count": 2048 00:15:11.236 } 00:15:11.236 } 00:15:11.236 ] 00:15:11.236 }, 00:15:11.236 { 00:15:11.236 "subsystem": "bdev", 00:15:11.236 "config": [ 00:15:11.236 { 00:15:11.236 "method": "bdev_set_options", 00:15:11.236 "params": { 00:15:11.236 "bdev_io_pool_size": 65535, 00:15:11.236 "bdev_io_cache_size": 256, 00:15:11.236 "bdev_auto_examine": true, 00:15:11.236 "iobuf_small_cache_size": 128, 00:15:11.236 "iobuf_large_cache_size": 16 00:15:11.236 } 00:15:11.236 }, 00:15:11.236 { 00:15:11.236 "method": "bdev_raid_set_options", 00:15:11.236 "params": { 00:15:11.236 "process_window_size_kb": 1024, 00:15:11.236 "process_max_bandwidth_mb_sec": 0 00:15:11.236 } 00:15:11.236 }, 00:15:11.236 { 00:15:11.236 "method": "bdev_iscsi_set_options", 00:15:11.236 "params": { 00:15:11.236 "timeout_sec": 30 00:15:11.236 } 00:15:11.236 
}, 00:15:11.236 { 00:15:11.236 "method": "bdev_nvme_set_options", 00:15:11.236 "params": { 00:15:11.236 "action_on_timeout": "none", 00:15:11.236 "timeout_us": 0, 00:15:11.236 "timeout_admin_us": 0, 00:15:11.236 "keep_alive_timeout_ms": 10000, 00:15:11.236 "arbitration_burst": 0, 00:15:11.236 "low_priority_weight": 0, 00:15:11.236 "medium_priority_weight": 0, 00:15:11.236 "high_priority_weight": 0, 00:15:11.236 "nvme_adminq_poll_period_us": 10000, 00:15:11.236 "nvme_ioq_poll_period_us": 0, 00:15:11.236 "io_queue_requests": 0, 00:15:11.236 "delay_cmd_submit": true, 00:15:11.236 "transport_retry_count": 4, 00:15:11.237 "bdev_retry_count": 3, 00:15:11.237 "transport_ack_timeout": 0, 00:15:11.237 "ctrlr_loss_timeout_sec": 0, 00:15:11.237 "reconnect_delay_sec": 0, 00:15:11.237 "fast_io_fail_timeout_sec": 0, 00:15:11.237 "disable_auto_failback": false, 00:15:11.237 "generate_uuids": false, 00:15:11.237 "transport_tos": 0, 00:15:11.237 "nvme_error_stat": false, 00:15:11.237 "rdma_srq_size": 0, 00:15:11.237 "io_path_stat": false, 00:15:11.237 "allow_accel_sequence": false, 00:15:11.237 "rdma_max_cq_size": 0, 00:15:11.237 "rdma_cm_event_timeout_ms": 0, 00:15:11.237 "dhchap_digests": [ 00:15:11.237 "sha256", 00:15:11.237 "sha384", 00:15:11.237 "sha512" 00:15:11.237 ], 00:15:11.237 "dhchap_dhgroups": [ 00:15:11.237 "null", 00:15:11.237 "ffdhe2048", 00:15:11.237 "ffdhe3072", 00:15:11.237 "ffdhe4096", 00:15:11.237 "ffdhe6144", 00:15:11.237 "ffdhe8192" 00:15:11.237 ] 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "bdev_nvme_set_hotplug", 00:15:11.237 "params": { 00:15:11.237 "period_us": 100000, 00:15:11.237 "enable": false 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "bdev_malloc_create", 00:15:11.237 "params": { 00:15:11.237 "name": "malloc0", 00:15:11.237 "num_blocks": 8192, 00:15:11.237 "block_size": 4096, 00:15:11.237 "physical_block_size": 4096, 00:15:11.237 "uuid": "5c3c0b90-6483-4833-ab49-2adfe274920a", 00:15:11.237 "optimal_io_boundary": 0, 00:15:11.237 "md_size": 0, 00:15:11.237 "dif_type": 0, 00:15:11.237 "dif_is_head_of_md": false, 00:15:11.237 "dif_pi_format": 0 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "bdev_wait_for_examine" 00:15:11.237 } 00:15:11.237 ] 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "subsystem": "nbd", 00:15:11.237 "config": [] 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "subsystem": "scheduler", 00:15:11.237 "config": [ 00:15:11.237 { 00:15:11.237 "method": "framework_set_scheduler", 00:15:11.237 "params": { 00:15:11.237 "name": "static" 00:15:11.237 } 00:15:11.237 } 00:15:11.237 ] 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "subsystem": "nvmf", 00:15:11.237 "config": [ 00:15:11.237 { 00:15:11.237 "method": "nvmf_set_config", 00:15:11.237 "params": { 00:15:11.237 "discovery_filter": "match_any", 00:15:11.237 "admin_cmd_passthru": { 00:15:11.237 "identify_ctrlr": false 00:15:11.237 }, 00:15:11.237 "dhchap_digests": [ 00:15:11.237 "sha256", 00:15:11.237 "sha384", 00:15:11.237 "sha512" 00:15:11.237 ], 00:15:11.237 "dhchap_dhgroups": [ 00:15:11.237 "null", 00:15:11.237 "ffdhe2048", 00:15:11.237 "ffdhe3072", 00:15:11.237 "ffdhe4096", 00:15:11.237 "ffdhe6144", 00:15:11.237 "ffdhe8192" 00:15:11.237 ] 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "nvmf_set_max_subsystems", 00:15:11.237 "params": { 00:15:11.237 "max_subsystems": 1024 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "nvmf_set_crdt", 00:15:11.237 "params": { 00:15:11.237 "crdt1": 0, 00:15:11.237 
"crdt2": 0, 00:15:11.237 "crdt3": 0 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "nvmf_create_transport", 00:15:11.237 "params": { 00:15:11.237 "trtype": "TCP", 00:15:11.237 "max_queue_depth": 128, 00:15:11.237 "max_io_qpairs_per_ctrlr": 127, 00:15:11.237 "in_capsule_data_size": 4096, 00:15:11.237 "max_io_size": 131072, 00:15:11.237 "io_unit_size": 131072, 00:15:11.237 "max_aq_depth": 128, 00:15:11.237 "num_shared_buffers": 511, 00:15:11.237 "buf_cache_size": 4294967295, 00:15:11.237 "dif_insert_or_strip": false, 00:15:11.237 "zcopy": false, 00:15:11.237 "c2h_success": false, 00:15:11.237 "sock_priority": 0, 00:15:11.237 "abort_timeout_sec": 1, 00:15:11.237 "ack_timeout": 0, 00:15:11.237 "data_wr_pool_size": 0 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "nvmf_create_subsystem", 00:15:11.237 "params": { 00:15:11.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.237 "allow_any_host": false, 00:15:11.237 "serial_number": "SPDK00000000000001", 00:15:11.237 "model_number": "SPDK bdev Controller", 00:15:11.237 "max_namespaces": 10, 00:15:11.237 "min_cntlid": 1, 00:15:11.237 "max_cntlid": 65519, 00:15:11.237 "ana_reporting": false 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "nvmf_subsystem_add_host", 00:15:11.237 "params": { 00:15:11.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.237 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.237 "psk": "key0" 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "nvmf_subsystem_add_ns", 00:15:11.237 "params": { 00:15:11.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.237 "namespace": { 00:15:11.237 "nsid": 1, 00:15:11.237 "bdev_name": "malloc0", 00:15:11.237 "nguid": "5C3C0B9064834833AB492ADFE274920A", 00:15:11.237 "uuid": "5c3c0b90-6483-4833-ab49-2adfe274920a", 00:15:11.237 "no_auto_visible": false 00:15:11.237 } 00:15:11.237 } 00:15:11.237 }, 00:15:11.237 { 00:15:11.237 "method": "nvmf_subsystem_add_listener", 00:15:11.237 "params": { 00:15:11.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.237 "listen_address": { 00:15:11.237 "trtype": "TCP", 00:15:11.237 "adrfam": "IPv4", 00:15:11.237 "traddr": "10.0.0.3", 00:15:11.237 "trsvcid": "4420" 00:15:11.237 }, 00:15:11.237 "secure_channel": true 00:15:11.237 } 00:15:11.237 } 00:15:11.237 ] 00:15:11.237 } 00:15:11.237 ] 00:15:11.237 }' 00:15:11.237 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:11.496 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:11.496 "subsystems": [ 00:15:11.496 { 00:15:11.496 "subsystem": "keyring", 00:15:11.496 "config": [ 00:15:11.496 { 00:15:11.496 "method": "keyring_file_add_key", 00:15:11.496 "params": { 00:15:11.496 "name": "key0", 00:15:11.496 "path": "/tmp/tmp.dyeRk31OMC" 00:15:11.496 } 00:15:11.496 } 00:15:11.496 ] 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "subsystem": "iobuf", 00:15:11.496 "config": [ 00:15:11.496 { 00:15:11.496 "method": "iobuf_set_options", 00:15:11.496 "params": { 00:15:11.496 "small_pool_count": 8192, 00:15:11.496 "large_pool_count": 1024, 00:15:11.496 "small_bufsize": 8192, 00:15:11.496 "large_bufsize": 135168, 00:15:11.496 "enable_numa": false 00:15:11.496 } 00:15:11.496 } 00:15:11.496 ] 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "subsystem": "sock", 00:15:11.496 "config": [ 00:15:11.496 { 00:15:11.496 "method": "sock_set_default_impl", 00:15:11.496 "params": { 00:15:11.496 "impl_name": "uring" 00:15:11.496 
} 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "method": "sock_impl_set_options", 00:15:11.496 "params": { 00:15:11.496 "impl_name": "ssl", 00:15:11.496 "recv_buf_size": 4096, 00:15:11.496 "send_buf_size": 4096, 00:15:11.496 "enable_recv_pipe": true, 00:15:11.496 "enable_quickack": false, 00:15:11.496 "enable_placement_id": 0, 00:15:11.496 "enable_zerocopy_send_server": true, 00:15:11.496 "enable_zerocopy_send_client": false, 00:15:11.496 "zerocopy_threshold": 0, 00:15:11.496 "tls_version": 0, 00:15:11.496 "enable_ktls": false 00:15:11.496 } 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "method": "sock_impl_set_options", 00:15:11.496 "params": { 00:15:11.496 "impl_name": "posix", 00:15:11.496 "recv_buf_size": 2097152, 00:15:11.496 "send_buf_size": 2097152, 00:15:11.496 "enable_recv_pipe": true, 00:15:11.496 "enable_quickack": false, 00:15:11.496 "enable_placement_id": 0, 00:15:11.496 "enable_zerocopy_send_server": true, 00:15:11.496 "enable_zerocopy_send_client": false, 00:15:11.496 "zerocopy_threshold": 0, 00:15:11.496 "tls_version": 0, 00:15:11.496 "enable_ktls": false 00:15:11.496 } 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "method": "sock_impl_set_options", 00:15:11.496 "params": { 00:15:11.496 "impl_name": "uring", 00:15:11.496 "recv_buf_size": 2097152, 00:15:11.496 "send_buf_size": 2097152, 00:15:11.496 "enable_recv_pipe": true, 00:15:11.496 "enable_quickack": false, 00:15:11.496 "enable_placement_id": 0, 00:15:11.496 "enable_zerocopy_send_server": false, 00:15:11.496 "enable_zerocopy_send_client": false, 00:15:11.496 "zerocopy_threshold": 0, 00:15:11.496 "tls_version": 0, 00:15:11.496 "enable_ktls": false 00:15:11.496 } 00:15:11.496 } 00:15:11.496 ] 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "subsystem": "vmd", 00:15:11.496 "config": [] 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "subsystem": "accel", 00:15:11.496 "config": [ 00:15:11.496 { 00:15:11.496 "method": "accel_set_options", 00:15:11.496 "params": { 00:15:11.496 "small_cache_size": 128, 00:15:11.496 "large_cache_size": 16, 00:15:11.496 "task_count": 2048, 00:15:11.496 "sequence_count": 2048, 00:15:11.496 "buf_count": 2048 00:15:11.496 } 00:15:11.496 } 00:15:11.496 ] 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "subsystem": "bdev", 00:15:11.496 "config": [ 00:15:11.496 { 00:15:11.496 "method": "bdev_set_options", 00:15:11.496 "params": { 00:15:11.496 "bdev_io_pool_size": 65535, 00:15:11.496 "bdev_io_cache_size": 256, 00:15:11.496 "bdev_auto_examine": true, 00:15:11.496 "iobuf_small_cache_size": 128, 00:15:11.496 "iobuf_large_cache_size": 16 00:15:11.496 } 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "method": "bdev_raid_set_options", 00:15:11.496 "params": { 00:15:11.496 "process_window_size_kb": 1024, 00:15:11.496 "process_max_bandwidth_mb_sec": 0 00:15:11.496 } 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "method": "bdev_iscsi_set_options", 00:15:11.496 "params": { 00:15:11.496 "timeout_sec": 30 00:15:11.496 } 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "method": "bdev_nvme_set_options", 00:15:11.496 "params": { 00:15:11.496 "action_on_timeout": "none", 00:15:11.496 "timeout_us": 0, 00:15:11.496 "timeout_admin_us": 0, 00:15:11.496 "keep_alive_timeout_ms": 10000, 00:15:11.496 "arbitration_burst": 0, 00:15:11.496 "low_priority_weight": 0, 00:15:11.496 "medium_priority_weight": 0, 00:15:11.496 "high_priority_weight": 0, 00:15:11.496 "nvme_adminq_poll_period_us": 10000, 00:15:11.496 "nvme_ioq_poll_period_us": 0, 00:15:11.496 "io_queue_requests": 512, 00:15:11.496 "delay_cmd_submit": true, 00:15:11.496 "transport_retry_count": 4, 
00:15:11.496 "bdev_retry_count": 3, 00:15:11.496 "transport_ack_timeout": 0, 00:15:11.496 "ctrlr_loss_timeout_sec": 0, 00:15:11.496 "reconnect_delay_sec": 0, 00:15:11.496 "fast_io_fail_timeout_sec": 0, 00:15:11.496 "disable_auto_failback": false, 00:15:11.496 "generate_uuids": false, 00:15:11.496 "transport_tos": 0, 00:15:11.496 "nvme_error_stat": false, 00:15:11.496 "rdma_srq_size": 0, 00:15:11.496 "io_path_stat": false, 00:15:11.496 "allow_accel_sequence": false, 00:15:11.496 "rdma_max_cq_size": 0, 00:15:11.496 "rdma_cm_event_timeout_ms": 0, 00:15:11.496 "dhchap_digests": [ 00:15:11.496 "sha256", 00:15:11.496 "sha384", 00:15:11.496 "sha512" 00:15:11.496 ], 00:15:11.496 "dhchap_dhgroups": [ 00:15:11.496 "null", 00:15:11.496 "ffdhe2048", 00:15:11.496 "ffdhe3072", 00:15:11.496 "ffdhe4096", 00:15:11.496 "ffdhe6144", 00:15:11.496 "ffdhe8192" 00:15:11.496 ] 00:15:11.496 } 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "method": "bdev_nvme_attach_controller", 00:15:11.496 "params": { 00:15:11.496 "name": "TLSTEST", 00:15:11.496 "trtype": "TCP", 00:15:11.496 "adrfam": "IPv4", 00:15:11.496 "traddr": "10.0.0.3", 00:15:11.496 "trsvcid": "4420", 00:15:11.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.496 "prchk_reftag": false, 00:15:11.496 "prchk_guard": false, 00:15:11.496 "ctrlr_loss_timeout_sec": 0, 00:15:11.496 "reconnect_delay_sec": 0, 00:15:11.496 "fast_io_fail_timeout_sec": 0, 00:15:11.496 "psk": "key0", 00:15:11.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.496 "hdgst": false, 00:15:11.496 "ddgst": false, 00:15:11.496 "multipath": "multipath" 00:15:11.496 } 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "method": "bdev_nvme_set_hotplug", 00:15:11.496 "params": { 00:15:11.496 "period_us": 100000, 00:15:11.496 "enable": false 00:15:11.496 } 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "method": "bdev_wait_for_examine" 00:15:11.496 } 00:15:11.496 ] 00:15:11.496 }, 00:15:11.496 { 00:15:11.496 "subsystem": "nbd", 00:15:11.496 "config": [] 00:15:11.496 } 00:15:11.496 ] 00:15:11.496 }' 00:15:11.496 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83786 00:15:11.496 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83786 ']' 00:15:11.496 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83786 00:15:11.496 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:11.496 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.496 16:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83786 00:15:11.496 killing process with pid 83786 00:15:11.496 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.496 00:15:11.496 Latency(us) 00:15:11.496 [2024-11-26T16:20:37.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.496 [2024-11-26T16:20:37.149Z] =================================================================================================================== 00:15:11.496 [2024-11-26T16:20:37.149Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:11.496 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:11.496 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:11.496 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 83786' 00:15:11.496 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83786 00:15:11.496 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83786 00:15:11.496 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83738 00:15:11.496 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83738 ']' 00:15:11.496 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83738 00:15:11.496 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83738 00:15:11.756 killing process with pid 83738 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83738' 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83738 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83738 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.756 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:11.756 "subsystems": [ 00:15:11.756 { 00:15:11.756 "subsystem": "keyring", 00:15:11.756 "config": [ 00:15:11.756 { 00:15:11.756 "method": "keyring_file_add_key", 00:15:11.756 "params": { 00:15:11.756 "name": "key0", 00:15:11.756 "path": "/tmp/tmp.dyeRk31OMC" 00:15:11.756 } 00:15:11.756 } 00:15:11.756 ] 00:15:11.756 }, 00:15:11.756 { 00:15:11.756 "subsystem": "iobuf", 00:15:11.756 "config": [ 00:15:11.756 { 00:15:11.756 "method": "iobuf_set_options", 00:15:11.756 "params": { 00:15:11.756 "small_pool_count": 8192, 00:15:11.756 "large_pool_count": 1024, 00:15:11.756 "small_bufsize": 8192, 00:15:11.756 "large_bufsize": 135168, 00:15:11.756 "enable_numa": false 00:15:11.756 } 00:15:11.756 } 00:15:11.756 ] 00:15:11.756 }, 00:15:11.756 { 00:15:11.756 "subsystem": "sock", 00:15:11.756 "config": [ 00:15:11.756 { 00:15:11.757 "method": "sock_set_default_impl", 00:15:11.757 "params": { 00:15:11.757 "impl_name": "uring" 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "sock_impl_set_options", 00:15:11.757 "params": { 00:15:11.757 "impl_name": "ssl", 00:15:11.757 "recv_buf_size": 4096, 00:15:11.757 "send_buf_size": 4096, 00:15:11.757 "enable_recv_pipe": true, 00:15:11.757 "enable_quickack": false, 00:15:11.757 "enable_placement_id": 0, 00:15:11.757 "enable_zerocopy_send_server": true, 00:15:11.757 "enable_zerocopy_send_client": false, 00:15:11.757 "zerocopy_threshold": 0, 00:15:11.757 "tls_version": 0, 00:15:11.757 
"enable_ktls": false 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "sock_impl_set_options", 00:15:11.757 "params": { 00:15:11.757 "impl_name": "posix", 00:15:11.757 "recv_buf_size": 2097152, 00:15:11.757 "send_buf_size": 2097152, 00:15:11.757 "enable_recv_pipe": true, 00:15:11.757 "enable_quickack": false, 00:15:11.757 "enable_placement_id": 0, 00:15:11.757 "enable_zerocopy_send_server": true, 00:15:11.757 "enable_zerocopy_send_client": false, 00:15:11.757 "zerocopy_threshold": 0, 00:15:11.757 "tls_version": 0, 00:15:11.757 "enable_ktls": false 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "sock_impl_set_options", 00:15:11.757 "params": { 00:15:11.757 "impl_name": "uring", 00:15:11.757 "recv_buf_size": 2097152, 00:15:11.757 "send_buf_size": 2097152, 00:15:11.757 "enable_recv_pipe": true, 00:15:11.757 "enable_quickack": false, 00:15:11.757 "enable_placement_id": 0, 00:15:11.757 "enable_zerocopy_send_server": false, 00:15:11.757 "enable_zerocopy_send_client": false, 00:15:11.757 "zerocopy_threshold": 0, 00:15:11.757 "tls_version": 0, 00:15:11.757 "enable_ktls": false 00:15:11.757 } 00:15:11.757 } 00:15:11.757 ] 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "subsystem": "vmd", 00:15:11.757 "config": [] 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "subsystem": "accel", 00:15:11.757 "config": [ 00:15:11.757 { 00:15:11.757 "method": "accel_set_options", 00:15:11.757 "params": { 00:15:11.757 "small_cache_size": 128, 00:15:11.757 "large_cache_size": 16, 00:15:11.757 "task_count": 2048, 00:15:11.757 "sequence_count": 2048, 00:15:11.757 "buf_count": 2048 00:15:11.757 } 00:15:11.757 } 00:15:11.757 ] 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "subsystem": "bdev", 00:15:11.757 "config": [ 00:15:11.757 { 00:15:11.757 "method": "bdev_set_options", 00:15:11.757 "params": { 00:15:11.757 "bdev_io_pool_size": 65535, 00:15:11.757 "bdev_io_cache_size": 256, 00:15:11.757 "bdev_auto_examine": true, 00:15:11.757 "iobuf_small_cache_size": 128, 00:15:11.757 "iobuf_large_cache_size": 16 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "bdev_raid_set_options", 00:15:11.757 "params": { 00:15:11.757 "process_window_size_kb": 1024, 00:15:11.757 "process_max_bandwidth_mb_sec": 0 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "bdev_iscsi_set_options", 00:15:11.757 "params": { 00:15:11.757 "timeout_sec": 30 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "bdev_nvme_set_options", 00:15:11.757 "params": { 00:15:11.757 "action_on_timeout": "none", 00:15:11.757 "timeout_us": 0, 00:15:11.757 "timeout_admin_us": 0, 00:15:11.757 "keep_alive_timeout_ms": 10000, 00:15:11.757 "arbitration_burst": 0, 00:15:11.757 "low_priority_weight": 0, 00:15:11.757 "medium_priority_weight": 0, 00:15:11.757 "high_priority_weight": 0, 00:15:11.757 "nvme_adminq_poll_period_us": 10000, 00:15:11.757 "nvme_ioq_poll_period_us": 0, 00:15:11.757 "io_queue_requests": 0, 00:15:11.757 "delay_cmd_submit": true, 00:15:11.757 "transport_retry_count": 4, 00:15:11.757 "bdev_retry_count": 3, 00:15:11.757 "transport_ack_timeout": 0, 00:15:11.757 "ctrlr_loss_timeout_sec": 0, 00:15:11.757 "reconnect_delay_sec": 0, 00:15:11.757 "fast_io_fail_timeout_sec": 0, 00:15:11.757 "disable_auto_failback": false, 00:15:11.757 "generate_uuids": false, 00:15:11.757 "transport_tos": 0, 00:15:11.757 "nvme_error_stat": false, 00:15:11.757 "rdma_srq_size": 0, 00:15:11.757 "io_path_stat": false, 00:15:11.757 "allow_accel_sequence": false, 00:15:11.757 "rdma_max_cq_size": 0, 
00:15:11.757 "rdma_cm_event_timeout_ms": 0, 00:15:11.757 "dhchap_digests": [ 00:15:11.757 "sha256", 00:15:11.757 "sha384", 00:15:11.757 "sha512" 00:15:11.757 ], 00:15:11.757 "dhchap_dhgroups": [ 00:15:11.757 "null", 00:15:11.757 "ffdhe2048", 00:15:11.757 "ffdhe3072", 00:15:11.757 "ffdhe4096", 00:15:11.757 "ffdhe6144", 00:15:11.757 "ffdhe8192" 00:15:11.757 ] 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "bdev_nvme_set_hotplug", 00:15:11.757 "params": { 00:15:11.757 "period_us": 100000, 00:15:11.757 "enable": false 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "bdev_malloc_create", 00:15:11.757 "params": { 00:15:11.757 "name": "malloc0", 00:15:11.757 "num_blocks": 8192, 00:15:11.757 "block_size": 4096, 00:15:11.757 "physical_block_size": 4096, 00:15:11.757 "uuid": "5c3c0b90-6483-4833-ab49-2adfe274920a", 00:15:11.757 "optimal_io_boundary": 0, 00:15:11.757 "md_size": 0, 00:15:11.757 "dif_type": 0, 00:15:11.757 "dif_is_head_of_md": false, 00:15:11.757 "dif_pi_format": 0 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "bdev_wait_for_examine" 00:15:11.757 } 00:15:11.757 ] 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "subsystem": "nbd", 00:15:11.757 "config": [] 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "subsystem": "scheduler", 00:15:11.757 "config": [ 00:15:11.757 { 00:15:11.757 "method": "framework_set_scheduler", 00:15:11.757 "params": { 00:15:11.757 "name": "static" 00:15:11.757 } 00:15:11.757 } 00:15:11.757 ] 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "subsystem": "nvmf", 00:15:11.757 "config": [ 00:15:11.757 { 00:15:11.757 "method": "nvmf_set_config", 00:15:11.757 "params": { 00:15:11.757 "discovery_filter": "match_any", 00:15:11.757 "admin_cmd_passthru": { 00:15:11.757 "identify_ctrlr": false 00:15:11.757 }, 00:15:11.757 "dhchap_digests": [ 00:15:11.757 "sha256", 00:15:11.757 "sha384", 00:15:11.757 "sha512" 00:15:11.757 ], 00:15:11.757 "dhchap_dhgroups": [ 00:15:11.757 "null", 00:15:11.757 "ffdhe2048", 00:15:11.757 "ffdhe3072", 00:15:11.757 "ffdhe4096", 00:15:11.757 "ffdhe6144", 00:15:11.757 "ffdhe8192" 00:15:11.757 ] 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "nvmf_set_max_subsystems", 00:15:11.757 "params": { 00:15:11.757 "max_subsystems": 1024 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "nvmf_set_crdt", 00:15:11.757 "params": { 00:15:11.757 "crdt1": 0, 00:15:11.757 "crdt2": 0, 00:15:11.757 "crdt3": 0 00:15:11.757 } 00:15:11.757 }, 00:15:11.757 { 00:15:11.757 "method": "nvmf_create_transport", 00:15:11.757 "params": { 00:15:11.757 "trtype": "TCP", 00:15:11.757 "max_queue_depth": 128, 00:15:11.757 "max_io_qpairs_per_ctrlr": 127, 00:15:11.757 "in_capsule_data_size": 4096, 00:15:11.757 "max_io_size": 131072, 00:15:11.757 "io_unit_size": 131072, 00:15:11.757 "max_aq_depth": 128, 00:15:11.757 "num_shared_buffers": 511, 00:15:11.757 "buf_cache_size": 4294967295, 00:15:11.757 "dif_insert_or_strip": false, 00:15:11.757 "zcopy": false, 00:15:11.757 "c2h_success": false, 00:15:11.757 "sock_priority": 0, 00:15:11.757 "abort_timeout_sec": 1, 00:15:11.758 "ack_timeout": 0, 00:15:11.758 "data_wr_pool_size": 0 00:15:11.758 } 00:15:11.758 }, 00:15:11.758 { 00:15:11.758 "method": "nvmf_create_subsystem", 00:15:11.758 "params": { 00:15:11.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.758 "allow_any_host": false, 00:15:11.758 "serial_number": "SPDK00000000000001", 00:15:11.758 "model_number": "SPDK bdev Controller", 00:15:11.758 "max_namespaces": 10, 00:15:11.758 "min_cntlid": 1, 
00:15:11.758 "max_cntlid": 65519, 00:15:11.758 "ana_reporting": false 00:15:11.758 } 00:15:11.758 }, 00:15:11.758 { 00:15:11.758 "method": "nvmf_subsystem_add_host", 00:15:11.758 "params": { 00:15:11.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.758 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.758 "psk": "key0" 00:15:11.758 } 00:15:11.758 }, 00:15:11.758 { 00:15:11.758 "method": "nvmf_subsystem_add_ns", 00:15:11.758 "params": { 00:15:11.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.758 "namespace": { 00:15:11.758 "nsid": 1, 00:15:11.758 "bdev_name": "malloc0", 00:15:11.758 "nguid": "5C3C0B9064834833AB492ADFE274920A", 00:15:11.758 "uuid": "5c3c0b90-6483-4833-ab49-2adfe274920a", 00:15:11.758 "no_auto_visible": false 00:15:11.758 } 00:15:11.758 } 00:15:11.758 }, 00:15:11.758 { 00:15:11.758 "method": "nvmf_subsystem_add_listener", 00:15:11.758 "params": { 00:15:11.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.758 "listen_address": { 00:15:11.758 "trtype": "TCP", 00:15:11.758 "adrfam": "IPv4", 00:15:11.758 "traddr": "10.0.0.3", 00:15:11.758 "trsvcid": "4420" 00:15:11.758 }, 00:15:11.758 "secure_channel": true 00:15:11.758 } 00:15:11.758 } 00:15:11.758 ] 00:15:11.758 } 00:15:11.758 ] 00:15:11.758 }' 00:15:11.758 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83841 00:15:11.758 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:11.758 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83841 00:15:11.758 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83841 ']' 00:15:11.758 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.758 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.758 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.758 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.758 16:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.758 [2024-11-26 16:20:37.367664] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:11.758 [2024-11-26 16:20:37.367767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.017 [2024-11-26 16:20:37.512031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.017 [2024-11-26 16:20:37.531848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.017 [2024-11-26 16:20:37.532104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.017 [2024-11-26 16:20:37.532125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.017 [2024-11-26 16:20:37.532134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:12.017 [2024-11-26 16:20:37.532141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.017 [2024-11-26 16:20:37.532529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.277 [2024-11-26 16:20:37.676467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.277 [2024-11-26 16:20:37.731393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.277 [2024-11-26 16:20:37.763314] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:12.277 [2024-11-26 16:20:37.763533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83873 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83873 /var/tmp/bdevperf.sock 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83873 ']' 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
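The -c /dev/fd/62 and -c /dev/fd/63 arguments that appear in this run are how the test harness hands the echoed JSON configuration above to an SPDK application without writing a file on disk: the JSON is bound to an extra file descriptor and the application reads /dev/fd/NN like any other config path. A minimal sketch of that pattern follows; the trimmed-down config body and the key path /tmp/example.psk are placeholders for illustration, not the values used by tls.sh.

# Bind the JSON to fd 62 with a here-string and point -c at /dev/fd/62.
# /tmp/example.psk is a placeholder; this run uses a temporary key such as /tmp/tmp.dyeRk31OMC.
CONFIG='{"subsystems":[{"subsystem":"keyring","config":[{"method":"keyring_file_add_key","params":{"name":"key0","path":"/tmp/example.psk"}}]}]}'
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c /dev/fd/62 62<<< "$CONFIG"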
00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:12.845 16:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:12.845 "subsystems": [ 00:15:12.845 { 00:15:12.845 "subsystem": "keyring", 00:15:12.845 "config": [ 00:15:12.845 { 00:15:12.845 "method": "keyring_file_add_key", 00:15:12.845 "params": { 00:15:12.845 "name": "key0", 00:15:12.845 "path": "/tmp/tmp.dyeRk31OMC" 00:15:12.845 } 00:15:12.845 } 00:15:12.845 ] 00:15:12.845 }, 00:15:12.845 { 00:15:12.845 "subsystem": "iobuf", 00:15:12.845 "config": [ 00:15:12.845 { 00:15:12.845 "method": "iobuf_set_options", 00:15:12.845 "params": { 00:15:12.845 "small_pool_count": 8192, 00:15:12.845 "large_pool_count": 1024, 00:15:12.845 "small_bufsize": 8192, 00:15:12.845 "large_bufsize": 135168, 00:15:12.845 "enable_numa": false 00:15:12.845 } 00:15:12.845 } 00:15:12.845 ] 00:15:12.845 }, 00:15:12.845 { 00:15:12.845 "subsystem": "sock", 00:15:12.845 "config": [ 00:15:12.845 { 00:15:12.845 "method": "sock_set_default_impl", 00:15:12.845 "params": { 00:15:12.845 "impl_name": "uring" 00:15:12.845 } 00:15:12.845 }, 00:15:12.845 { 00:15:12.845 "method": "sock_impl_set_options", 00:15:12.845 "params": { 00:15:12.845 "impl_name": "ssl", 00:15:12.845 "recv_buf_size": 4096, 00:15:12.845 "send_buf_size": 4096, 00:15:12.845 "enable_recv_pipe": true, 00:15:12.845 "enable_quickack": false, 00:15:12.845 "enable_placement_id": 0, 00:15:12.845 "enable_zerocopy_send_server": true, 00:15:12.845 "enable_zerocopy_send_client": false, 00:15:12.845 "zerocopy_threshold": 0, 00:15:12.845 "tls_version": 0, 00:15:12.845 "enable_ktls": false 00:15:12.845 } 00:15:12.845 }, 00:15:12.845 { 00:15:12.845 "method": "sock_impl_set_options", 00:15:12.845 "params": { 00:15:12.845 "impl_name": "posix", 00:15:12.845 "recv_buf_size": 2097152, 00:15:12.845 "send_buf_size": 2097152, 00:15:12.845 "enable_recv_pipe": true, 00:15:12.845 "enable_quickack": false, 00:15:12.845 "enable_placement_id": 0, 00:15:12.845 "enable_zerocopy_send_server": true, 00:15:12.845 "enable_zerocopy_send_client": false, 00:15:12.845 "zerocopy_threshold": 0, 00:15:12.845 "tls_version": 0, 00:15:12.845 "enable_ktls": false 00:15:12.845 } 00:15:12.845 }, 00:15:12.845 { 00:15:12.845 "method": "sock_impl_set_options", 00:15:12.845 "params": { 00:15:12.845 "impl_name": "uring", 00:15:12.845 "recv_buf_size": 2097152, 00:15:12.845 "send_buf_size": 2097152, 00:15:12.845 "enable_recv_pipe": true, 00:15:12.845 "enable_quickack": false, 00:15:12.845 "enable_placement_id": 0, 00:15:12.845 "enable_zerocopy_send_server": false, 00:15:12.845 "enable_zerocopy_send_client": false, 00:15:12.845 "zerocopy_threshold": 0, 00:15:12.845 "tls_version": 0, 00:15:12.845 "enable_ktls": false 00:15:12.845 } 00:15:12.845 } 00:15:12.845 ] 00:15:12.845 }, 00:15:12.845 { 00:15:12.846 "subsystem": "vmd", 00:15:12.846 "config": [] 00:15:12.846 }, 00:15:12.846 { 00:15:12.846 "subsystem": "accel", 00:15:12.846 "config": [ 00:15:12.846 { 00:15:12.846 "method": "accel_set_options", 00:15:12.846 "params": { 00:15:12.846 "small_cache_size": 128, 00:15:12.846 "large_cache_size": 16, 00:15:12.846 "task_count": 2048, 00:15:12.846 "sequence_count": 
2048, 00:15:12.846 "buf_count": 2048 00:15:12.846 } 00:15:12.846 } 00:15:12.846 ] 00:15:12.846 }, 00:15:12.846 { 00:15:12.846 "subsystem": "bdev", 00:15:12.846 "config": [ 00:15:12.846 { 00:15:12.846 "method": "bdev_set_options", 00:15:12.846 "params": { 00:15:12.846 "bdev_io_pool_size": 65535, 00:15:12.846 "bdev_io_cache_size": 256, 00:15:12.846 "bdev_auto_examine": true, 00:15:12.846 "iobuf_small_cache_size": 128, 00:15:12.846 "iobuf_large_cache_size": 16 00:15:12.846 } 00:15:12.846 }, 00:15:12.846 { 00:15:12.846 "method": "bdev_raid_set_options", 00:15:12.846 "params": { 00:15:12.846 "process_window_size_kb": 1024, 00:15:12.846 "process_max_bandwidth_mb_sec": 0 00:15:12.846 } 00:15:12.846 }, 00:15:12.846 { 00:15:12.846 "method": "bdev_iscsi_set_options", 00:15:12.846 "params": { 00:15:12.846 "timeout_sec": 30 00:15:12.846 } 00:15:12.846 }, 00:15:12.846 { 00:15:12.846 "method": "bdev_nvme_set_options", 00:15:12.846 "params": { 00:15:12.846 "action_on_timeout": "none", 00:15:12.846 "timeout_us": 0, 00:15:12.846 "timeout_admin_us": 0, 00:15:12.846 "keep_alive_timeout_ms": 10000, 00:15:12.846 "arbitration_burst": 0, 00:15:12.846 "low_priority_weight": 0, 00:15:12.846 "medium_priority_weight": 0, 00:15:12.846 "high_priority_weight": 0, 00:15:12.846 "nvme_adminq_poll_period_us": 10000, 00:15:12.846 "nvme_ioq_poll_period_us": 0, 00:15:12.846 "io_queue_requests": 512, 00:15:12.846 "delay_cmd_submit": true, 00:15:12.846 "transport_retry_count": 4, 00:15:12.846 "bdev_retry_count": 3, 00:15:12.846 "transport_ack_timeout": 0, 00:15:12.846 "ctrlr_loss_timeout_sec": 0, 00:15:12.846 "reconnect_delay_sec": 0, 00:15:12.846 "fast_io_fail_timeout_sec": 0, 00:15:12.846 "disable_auto_failback": false, 00:15:12.846 "generate_uuids": false, 00:15:12.846 "transport_tos": 0, 00:15:12.846 "nvme_error_stat": false, 00:15:12.846 "rdma_srq_size": 0, 00:15:12.846 "io_path_stat": false, 00:15:12.846 "allow_accel_sequence": false, 00:15:12.846 "rdma_max_cq_size": 0, 00:15:12.846 "rdma_cm_event_timeout_ms": 0, 00:15:12.846 "dhchap_digests": [ 00:15:12.846 "sha256", 00:15:12.846 "sha384", 00:15:12.846 "sha512" 00:15:12.846 ], 00:15:12.846 "dhchap_dhgroups": [ 00:15:12.846 "null", 00:15:12.846 "ffdhe2048", 00:15:12.846 "ffdhe3072", 00:15:12.846 "ffdhe4096", 00:15:12.846 "ffdhe6144", 00:15:12.846 "ffdhe8192" 00:15:12.846 ] 00:15:12.846 } 00:15:12.846 }, 00:15:12.846 { 00:15:12.846 "method": "bdev_nvme_attach_controller", 00:15:12.846 "params": { 00:15:12.846 "name": "TLSTEST", 00:15:12.846 "trtype": "TCP", 00:15:12.846 "adrfam": "IPv4", 00:15:12.846 "traddr": "10.0.0.3", 00:15:12.846 "trsvcid": "4420", 00:15:12.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.846 "prchk_reftag": false, 00:15:12.846 "prchk_guard": false, 00:15:12.846 "ctrlr_loss_timeout_sec": 0, 00:15:12.846 "reconnect_delay_sec": 0, 00:15:12.846 "fast_io_fail_timeout_sec": 0, 00:15:12.846 "psk": "key0", 00:15:12.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.846 "hdgst": false, 00:15:12.846 "ddgst": false, 00:15:12.846 "multipath": "multipath" 00:15:12.846 } 00:15:12.846 }, 00:15:12.846 { 00:15:12.846 "method": "bdev_nvme_set_hotplug", 00:15:12.846 "params": { 00:15:12.846 "period_us": 100000, 00:15:12.846 "enable": false 00:15:12.846 } 00:15:12.846 }, 00:15:12.846 { 00:15:12.846 "method": "bdev_wait_for_examine" 00:15:12.846 } 00:15:12.846 ] 00:15:12.846 }, 00:15:12.846 { 00:15:12.846 "subsystem": "nbd", 00:15:12.846 "config": [] 00:15:12.846 } 00:15:12.846 ] 00:15:12.846 }' 00:15:13.105 [2024-11-26 16:20:38.516209] Starting SPDK v25.01-pre git 
sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:13.105 [2024-11-26 16:20:38.516311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83873 ] 00:15:13.105 [2024-11-26 16:20:38.678830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.105 [2024-11-26 16:20:38.707015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.363 [2024-11-26 16:20:38.822648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.363 [2024-11-26 16:20:38.853534] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:14.298 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.298 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:14.298 16:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:14.298 Running I/O for 10 seconds... 00:15:16.168 3934.00 IOPS, 15.37 MiB/s [2024-11-26T16:20:42.758Z] 4019.50 IOPS, 15.70 MiB/s [2024-11-26T16:20:44.137Z] 4073.00 IOPS, 15.91 MiB/s [2024-11-26T16:20:45.075Z] 4097.00 IOPS, 16.00 MiB/s [2024-11-26T16:20:46.013Z] 4125.40 IOPS, 16.11 MiB/s [2024-11-26T16:20:46.952Z] 4147.67 IOPS, 16.20 MiB/s [2024-11-26T16:20:47.891Z] 4164.86 IOPS, 16.27 MiB/s [2024-11-26T16:20:48.826Z] 4178.00 IOPS, 16.32 MiB/s [2024-11-26T16:20:49.761Z] 4161.22 IOPS, 16.25 MiB/s [2024-11-26T16:20:49.761Z] 4117.00 IOPS, 16.08 MiB/s 00:15:24.108 Latency(us) 00:15:24.108 [2024-11-26T16:20:49.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.108 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:24.108 Verification LBA range: start 0x0 length 0x2000 00:15:24.108 TLSTESTn1 : 10.02 4122.12 16.10 0.00 0.00 30994.60 6642.97 37415.10 00:15:24.108 [2024-11-26T16:20:49.761Z] =================================================================================================================== 00:15:24.108 [2024-11-26T16:20:49.761Z] Total : 4122.12 16.10 0.00 0.00 30994.60 6642.97 37415.10 00:15:24.108 { 00:15:24.108 "results": [ 00:15:24.108 { 00:15:24.108 "job": "TLSTESTn1", 00:15:24.108 "core_mask": "0x4", 00:15:24.108 "workload": "verify", 00:15:24.108 "status": "finished", 00:15:24.108 "verify_range": { 00:15:24.108 "start": 0, 00:15:24.108 "length": 8192 00:15:24.108 }, 00:15:24.108 "queue_depth": 128, 00:15:24.108 "io_size": 4096, 00:15:24.108 "runtime": 10.018628, 00:15:24.108 "iops": 4122.121312419225, 00:15:24.108 "mibps": 16.1020363766376, 00:15:24.108 "io_failed": 0, 00:15:24.108 "io_timeout": 0, 00:15:24.108 "avg_latency_us": 30994.604521812635, 00:15:24.108 "min_latency_us": 6642.967272727273, 00:15:24.108 "max_latency_us": 37415.09818181818 00:15:24.108 } 00:15:24.108 ], 00:15:24.108 "core_count": 1 00:15:24.108 } 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83873 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83873 ']' 00:15:24.367 
16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83873 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83873 00:15:24.367 killing process with pid 83873 00:15:24.367 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.367 00:15:24.367 Latency(us) 00:15:24.367 [2024-11-26T16:20:50.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.367 [2024-11-26T16:20:50.020Z] =================================================================================================================== 00:15:24.367 [2024-11-26T16:20:50.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83873' 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83873 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83873 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83841 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83841 ']' 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83841 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83841 00:15:24.367 killing process with pid 83841 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:24.367 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83841' 00:15:24.368 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83841 00:15:24.368 16:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83841 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84006 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84006 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84006 ']' 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.627 16:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.627 [2024-11-26 16:20:50.167808] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:24.627 [2024-11-26 16:20:50.167915] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.885 [2024-11-26 16:20:50.319784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.885 [2024-11-26 16:20:50.342318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.885 [2024-11-26 16:20:50.342393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.885 [2024-11-26 16:20:50.342408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.885 [2024-11-26 16:20:50.342419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.885 [2024-11-26 16:20:50.342428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:24.885 [2024-11-26 16:20:50.342764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.885 [2024-11-26 16:20:50.376719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.dyeRk31OMC 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dyeRk31OMC 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:25.822 [2024-11-26 16:20:51.408829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.822 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:26.081 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:26.339 [2024-11-26 16:20:51.916995] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:26.339 [2024-11-26 16:20:51.917244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:26.339 16:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:26.598 malloc0 00:15:26.598 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:26.856 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC 00:15:27.115 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:27.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
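Condensed, the setup_nvmf_tgt sequence traced above amounts to the following rpc.py calls against the default target socket, with every argument taken from the trace: a TCP transport, a subsystem with one malloc namespace, a listener created with -k (which is what produces the "TLS support is considered experimental" notice and the secure_channel listener setting seen in the saved configurations elsewhere in this log), and a host entry bound to key0 so nqn.2016-06.io.spdk:host1 must present that PSK.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Transport and subsystem on the target
$rpc_py nvmf_create_transport -t tcp -o
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# TLS-capable TCP listener
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
# Backing namespace
$rpc_py bdev_malloc_create 32 4096 -b malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Register the PSK and tie the host to it
$rpc_py keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC
$rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0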
00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84067 00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84067 /var/tmp/bdevperf.sock 00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84067 ']' 00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.375 16:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.375 [2024-11-26 16:20:52.978610] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:27.375 [2024-11-26 16:20:52.979013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84067 ] 00:15:27.634 [2024-11-26 16:20:53.127926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.634 [2024-11-26 16:20:53.152325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.634 [2024-11-26 16:20:53.187011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:27.634 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.634 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:27.634 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC 00:15:27.893 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:28.152 [2024-11-26 16:20:53.729571] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:28.411 nvme0n1 00:15:28.411 16:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:28.411 Running I/O for 1 seconds... 
00:15:29.346 4112.00 IOPS, 16.06 MiB/s 00:15:29.346 Latency(us) 00:15:29.346 [2024-11-26T16:20:54.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.346 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:29.346 Verification LBA range: start 0x0 length 0x2000 00:15:29.346 nvme0n1 : 1.02 4160.71 16.25 0.00 0.00 30383.13 1727.77 18945.86 00:15:29.346 [2024-11-26T16:20:54.999Z] =================================================================================================================== 00:15:29.346 [2024-11-26T16:20:54.999Z] Total : 4160.71 16.25 0.00 0.00 30383.13 1727.77 18945.86 00:15:29.346 { 00:15:29.346 "results": [ 00:15:29.346 { 00:15:29.346 "job": "nvme0n1", 00:15:29.346 "core_mask": "0x2", 00:15:29.346 "workload": "verify", 00:15:29.346 "status": "finished", 00:15:29.346 "verify_range": { 00:15:29.346 "start": 0, 00:15:29.346 "length": 8192 00:15:29.346 }, 00:15:29.346 "queue_depth": 128, 00:15:29.346 "io_size": 4096, 00:15:29.346 "runtime": 1.019056, 00:15:29.346 "iops": 4160.713444599708, 00:15:29.346 "mibps": 16.25278689296761, 00:15:29.346 "io_failed": 0, 00:15:29.346 "io_timeout": 0, 00:15:29.346 "avg_latency_us": 30383.129468267583, 00:15:29.346 "min_latency_us": 1727.7672727272727, 00:15:29.346 "max_latency_us": 18945.861818181816 00:15:29.346 } 00:15:29.346 ], 00:15:29.346 "core_count": 1 00:15:29.346 } 00:15:29.346 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84067 00:15:29.346 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84067 ']' 00:15:29.346 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84067 00:15:29.346 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:29.346 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.346 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84067 00:15:29.605 killing process with pid 84067 00:15:29.605 Received shutdown signal, test time was about 1.000000 seconds 00:15:29.605 00:15:29.605 Latency(us) 00:15:29.605 [2024-11-26T16:20:55.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.605 [2024-11-26T16:20:55.258Z] =================================================================================================================== 00:15:29.605 [2024-11-26T16:20:55.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.605 16:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84067' 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84067 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84067 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84006 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84006 ']' 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84006 00:15:29.605 16:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84006 00:15:29.605 killing process with pid 84006 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84006' 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84006 00:15:29.605 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84006 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84105 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84105 00:15:29.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84105 ']' 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.863 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.864 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.864 [2024-11-26 16:20:55.356606] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:29.864 [2024-11-26 16:20:55.356915] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.864 [2024-11-26 16:20:55.502283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.122 [2024-11-26 16:20:55.522853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.122 [2024-11-26 16:20:55.523141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:30.122 [2024-11-26 16:20:55.523176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.122 [2024-11-26 16:20:55.523186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.122 [2024-11-26 16:20:55.523193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.122 [2024-11-26 16:20:55.523542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.122 [2024-11-26 16:20:55.552116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.123 [2024-11-26 16:20:55.640296] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.123 malloc0 00:15:30.123 [2024-11-26 16:20:55.666968] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:30.123 [2024-11-26 16:20:55.667164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84130 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84130 /var/tmp/bdevperf.sock 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84130 ']' 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
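On the initiator side, the bdevperf process launched next is configured over its own RPC socket rather than through a JSON file: the same PSK file is registered as key0 and the controller is attached with --psk, so the NVMe/TCP connection to 10.0.0.3:4420 is made over TLS. The trace lines that follow show exactly this; in condensed form, the sequence is:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Give bdevperf the PSK under the same key name used on the target
$rpc_py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC
# Attach the TLS-secured controller
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# Kick off the verify workload
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests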
00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.123 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.123 [2024-11-26 16:20:55.754040] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:30.123 [2024-11-26 16:20:55.754460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84130 ] 00:15:30.382 [2024-11-26 16:20:55.894282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.382 [2024-11-26 16:20:55.914841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.382 [2024-11-26 16:20:55.944254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.382 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.382 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:30.382 16:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dyeRk31OMC 00:15:30.640 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:31.207 [2024-11-26 16:20:56.548142] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.207 nvme0n1 00:15:31.207 16:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.207 Running I/O for 1 seconds... 
00:15:32.144 4224.00 IOPS, 16.50 MiB/s 00:15:32.144 Latency(us) 00:15:32.144 [2024-11-26T16:20:57.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.144 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.144 Verification LBA range: start 0x0 length 0x2000 00:15:32.144 nvme0n1 : 1.03 4236.26 16.55 0.00 0.00 29883.59 7268.54 20018.27 00:15:32.144 [2024-11-26T16:20:57.797Z] =================================================================================================================== 00:15:32.144 [2024-11-26T16:20:57.797Z] Total : 4236.26 16.55 0.00 0.00 29883.59 7268.54 20018.27 00:15:32.144 { 00:15:32.144 "results": [ 00:15:32.144 { 00:15:32.144 "job": "nvme0n1", 00:15:32.144 "core_mask": "0x2", 00:15:32.144 "workload": "verify", 00:15:32.144 "status": "finished", 00:15:32.144 "verify_range": { 00:15:32.144 "start": 0, 00:15:32.144 "length": 8192 00:15:32.144 }, 00:15:32.144 "queue_depth": 128, 00:15:32.144 "io_size": 4096, 00:15:32.144 "runtime": 1.027321, 00:15:32.144 "iops": 4236.261110208007, 00:15:32.144 "mibps": 16.54789496175003, 00:15:32.144 "io_failed": 0, 00:15:32.144 "io_timeout": 0, 00:15:32.144 "avg_latency_us": 29883.591871657754, 00:15:32.144 "min_latency_us": 7268.538181818182, 00:15:32.144 "max_latency_us": 20018.269090909092 00:15:32.144 } 00:15:32.144 ], 00:15:32.144 "core_count": 1 00:15:32.144 } 00:15:32.403 16:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:32.403 16:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.403 16:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.403 16:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.403 16:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:32.403 "subsystems": [ 00:15:32.403 { 00:15:32.403 "subsystem": "keyring", 00:15:32.403 "config": [ 00:15:32.403 { 00:15:32.403 "method": "keyring_file_add_key", 00:15:32.403 "params": { 00:15:32.403 "name": "key0", 00:15:32.403 "path": "/tmp/tmp.dyeRk31OMC" 00:15:32.403 } 00:15:32.403 } 00:15:32.403 ] 00:15:32.403 }, 00:15:32.403 { 00:15:32.403 "subsystem": "iobuf", 00:15:32.403 "config": [ 00:15:32.403 { 00:15:32.403 "method": "iobuf_set_options", 00:15:32.403 "params": { 00:15:32.403 "small_pool_count": 8192, 00:15:32.403 "large_pool_count": 1024, 00:15:32.403 "small_bufsize": 8192, 00:15:32.403 "large_bufsize": 135168, 00:15:32.403 "enable_numa": false 00:15:32.403 } 00:15:32.403 } 00:15:32.403 ] 00:15:32.403 }, 00:15:32.403 { 00:15:32.403 "subsystem": "sock", 00:15:32.403 "config": [ 00:15:32.403 { 00:15:32.403 "method": "sock_set_default_impl", 00:15:32.403 "params": { 00:15:32.403 "impl_name": "uring" 00:15:32.403 } 00:15:32.403 }, 00:15:32.404 { 00:15:32.404 "method": "sock_impl_set_options", 00:15:32.404 "params": { 00:15:32.404 "impl_name": "ssl", 00:15:32.404 "recv_buf_size": 4096, 00:15:32.404 "send_buf_size": 4096, 00:15:32.404 "enable_recv_pipe": true, 00:15:32.404 "enable_quickack": false, 00:15:32.404 "enable_placement_id": 0, 00:15:32.404 "enable_zerocopy_send_server": true, 00:15:32.404 "enable_zerocopy_send_client": false, 00:15:32.404 "zerocopy_threshold": 0, 00:15:32.404 "tls_version": 0, 00:15:32.404 "enable_ktls": false 00:15:32.404 } 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "method": "sock_impl_set_options", 00:15:32.404 "params": { 00:15:32.404 "impl_name": "posix", 
00:15:32.404 "recv_buf_size": 2097152, 00:15:32.404 "send_buf_size": 2097152, 00:15:32.404 "enable_recv_pipe": true, 00:15:32.404 "enable_quickack": false, 00:15:32.404 "enable_placement_id": 0, 00:15:32.404 "enable_zerocopy_send_server": true, 00:15:32.404 "enable_zerocopy_send_client": false, 00:15:32.404 "zerocopy_threshold": 0, 00:15:32.404 "tls_version": 0, 00:15:32.404 "enable_ktls": false 00:15:32.404 } 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "method": "sock_impl_set_options", 00:15:32.404 "params": { 00:15:32.404 "impl_name": "uring", 00:15:32.404 "recv_buf_size": 2097152, 00:15:32.404 "send_buf_size": 2097152, 00:15:32.404 "enable_recv_pipe": true, 00:15:32.404 "enable_quickack": false, 00:15:32.404 "enable_placement_id": 0, 00:15:32.404 "enable_zerocopy_send_server": false, 00:15:32.404 "enable_zerocopy_send_client": false, 00:15:32.404 "zerocopy_threshold": 0, 00:15:32.404 "tls_version": 0, 00:15:32.404 "enable_ktls": false 00:15:32.404 } 00:15:32.404 } 00:15:32.404 ] 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "subsystem": "vmd", 00:15:32.404 "config": [] 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "subsystem": "accel", 00:15:32.404 "config": [ 00:15:32.404 { 00:15:32.404 "method": "accel_set_options", 00:15:32.404 "params": { 00:15:32.404 "small_cache_size": 128, 00:15:32.404 "large_cache_size": 16, 00:15:32.404 "task_count": 2048, 00:15:32.404 "sequence_count": 2048, 00:15:32.404 "buf_count": 2048 00:15:32.404 } 00:15:32.404 } 00:15:32.404 ] 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "subsystem": "bdev", 00:15:32.404 "config": [ 00:15:32.404 { 00:15:32.404 "method": "bdev_set_options", 00:15:32.404 "params": { 00:15:32.404 "bdev_io_pool_size": 65535, 00:15:32.404 "bdev_io_cache_size": 256, 00:15:32.404 "bdev_auto_examine": true, 00:15:32.404 "iobuf_small_cache_size": 128, 00:15:32.404 "iobuf_large_cache_size": 16 00:15:32.404 } 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "method": "bdev_raid_set_options", 00:15:32.404 "params": { 00:15:32.404 "process_window_size_kb": 1024, 00:15:32.404 "process_max_bandwidth_mb_sec": 0 00:15:32.404 } 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "method": "bdev_iscsi_set_options", 00:15:32.404 "params": { 00:15:32.404 "timeout_sec": 30 00:15:32.404 } 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "method": "bdev_nvme_set_options", 00:15:32.404 "params": { 00:15:32.404 "action_on_timeout": "none", 00:15:32.404 "timeout_us": 0, 00:15:32.404 "timeout_admin_us": 0, 00:15:32.404 "keep_alive_timeout_ms": 10000, 00:15:32.404 "arbitration_burst": 0, 00:15:32.404 "low_priority_weight": 0, 00:15:32.404 "medium_priority_weight": 0, 00:15:32.404 "high_priority_weight": 0, 00:15:32.404 "nvme_adminq_poll_period_us": 10000, 00:15:32.404 "nvme_ioq_poll_period_us": 0, 00:15:32.404 "io_queue_requests": 0, 00:15:32.404 "delay_cmd_submit": true, 00:15:32.404 "transport_retry_count": 4, 00:15:32.404 "bdev_retry_count": 3, 00:15:32.404 "transport_ack_timeout": 0, 00:15:32.404 "ctrlr_loss_timeout_sec": 0, 00:15:32.404 "reconnect_delay_sec": 0, 00:15:32.404 "fast_io_fail_timeout_sec": 0, 00:15:32.404 "disable_auto_failback": false, 00:15:32.404 "generate_uuids": false, 00:15:32.404 "transport_tos": 0, 00:15:32.404 "nvme_error_stat": false, 00:15:32.404 "rdma_srq_size": 0, 00:15:32.404 "io_path_stat": false, 00:15:32.404 "allow_accel_sequence": false, 00:15:32.404 "rdma_max_cq_size": 0, 00:15:32.404 "rdma_cm_event_timeout_ms": 0, 00:15:32.404 "dhchap_digests": [ 00:15:32.404 "sha256", 00:15:32.404 "sha384", 00:15:32.404 "sha512" 00:15:32.404 ], 00:15:32.404 
"dhchap_dhgroups": [ 00:15:32.404 "null", 00:15:32.404 "ffdhe2048", 00:15:32.404 "ffdhe3072", 00:15:32.404 "ffdhe4096", 00:15:32.404 "ffdhe6144", 00:15:32.404 "ffdhe8192" 00:15:32.404 ] 00:15:32.404 } 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "method": "bdev_nvme_set_hotplug", 00:15:32.404 "params": { 00:15:32.404 "period_us": 100000, 00:15:32.404 "enable": false 00:15:32.404 } 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "method": "bdev_malloc_create", 00:15:32.404 "params": { 00:15:32.404 "name": "malloc0", 00:15:32.404 "num_blocks": 8192, 00:15:32.404 "block_size": 4096, 00:15:32.404 "physical_block_size": 4096, 00:15:32.404 "uuid": "d3caf4b6-406e-495c-8034-816cfc4bc004", 00:15:32.404 "optimal_io_boundary": 0, 00:15:32.404 "md_size": 0, 00:15:32.404 "dif_type": 0, 00:15:32.404 "dif_is_head_of_md": false, 00:15:32.404 "dif_pi_format": 0 00:15:32.404 } 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "method": "bdev_wait_for_examine" 00:15:32.404 } 00:15:32.404 ] 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "subsystem": "nbd", 00:15:32.404 "config": [] 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "subsystem": "scheduler", 00:15:32.404 "config": [ 00:15:32.404 { 00:15:32.404 "method": "framework_set_scheduler", 00:15:32.404 "params": { 00:15:32.404 "name": "static" 00:15:32.404 } 00:15:32.404 } 00:15:32.404 ] 00:15:32.404 }, 00:15:32.404 { 00:15:32.404 "subsystem": "nvmf", 00:15:32.404 "config": [ 00:15:32.404 { 00:15:32.404 "method": "nvmf_set_config", 00:15:32.404 "params": { 00:15:32.404 "discovery_filter": "match_any", 00:15:32.404 "admin_cmd_passthru": { 00:15:32.404 "identify_ctrlr": false 00:15:32.404 }, 00:15:32.404 "dhchap_digests": [ 00:15:32.404 "sha256", 00:15:32.404 "sha384", 00:15:32.404 "sha512" 00:15:32.404 ], 00:15:32.404 "dhchap_dhgroups": [ 00:15:32.404 "null", 00:15:32.404 "ffdhe2048", 00:15:32.404 "ffdhe3072", 00:15:32.404 "ffdhe4096", 00:15:32.404 "ffdhe6144", 00:15:32.404 "ffdhe8192" 00:15:32.404 ] 00:15:32.405 } 00:15:32.405 }, 00:15:32.405 { 00:15:32.405 "method": "nvmf_set_max_subsystems", 00:15:32.405 "params": { 00:15:32.405 "max_subsystems": 1024 00:15:32.405 } 00:15:32.405 }, 00:15:32.405 { 00:15:32.405 "method": "nvmf_set_crdt", 00:15:32.405 "params": { 00:15:32.405 "crdt1": 0, 00:15:32.405 "crdt2": 0, 00:15:32.405 "crdt3": 0 00:15:32.405 } 00:15:32.405 }, 00:15:32.405 { 00:15:32.405 "method": "nvmf_create_transport", 00:15:32.405 "params": { 00:15:32.405 "trtype": "TCP", 00:15:32.405 "max_queue_depth": 128, 00:15:32.405 "max_io_qpairs_per_ctrlr": 127, 00:15:32.405 "in_capsule_data_size": 4096, 00:15:32.405 "max_io_size": 131072, 00:15:32.405 "io_unit_size": 131072, 00:15:32.405 "max_aq_depth": 128, 00:15:32.405 "num_shared_buffers": 511, 00:15:32.405 "buf_cache_size": 4294967295, 00:15:32.405 "dif_insert_or_strip": false, 00:15:32.405 "zcopy": false, 00:15:32.405 "c2h_success": false, 00:15:32.405 "sock_priority": 0, 00:15:32.405 "abort_timeout_sec": 1, 00:15:32.405 "ack_timeout": 0, 00:15:32.405 "data_wr_pool_size": 0 00:15:32.405 } 00:15:32.405 }, 00:15:32.405 { 00:15:32.405 "method": "nvmf_create_subsystem", 00:15:32.405 "params": { 00:15:32.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.405 "allow_any_host": false, 00:15:32.405 "serial_number": "00000000000000000000", 00:15:32.405 "model_number": "SPDK bdev Controller", 00:15:32.405 "max_namespaces": 32, 00:15:32.405 "min_cntlid": 1, 00:15:32.405 "max_cntlid": 65519, 00:15:32.405 "ana_reporting": false 00:15:32.405 } 00:15:32.405 }, 00:15:32.405 { 00:15:32.405 "method": "nvmf_subsystem_add_host", 
00:15:32.405 "params": { 00:15:32.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.405 "host": "nqn.2016-06.io.spdk:host1", 00:15:32.405 "psk": "key0" 00:15:32.405 } 00:15:32.405 }, 00:15:32.405 { 00:15:32.405 "method": "nvmf_subsystem_add_ns", 00:15:32.405 "params": { 00:15:32.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.405 "namespace": { 00:15:32.405 "nsid": 1, 00:15:32.405 "bdev_name": "malloc0", 00:15:32.405 "nguid": "D3CAF4B6406E495C8034816CFC4BC004", 00:15:32.405 "uuid": "d3caf4b6-406e-495c-8034-816cfc4bc004", 00:15:32.405 "no_auto_visible": false 00:15:32.405 } 00:15:32.405 } 00:15:32.405 }, 00:15:32.405 { 00:15:32.405 "method": "nvmf_subsystem_add_listener", 00:15:32.405 "params": { 00:15:32.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.405 "listen_address": { 00:15:32.405 "trtype": "TCP", 00:15:32.405 "adrfam": "IPv4", 00:15:32.405 "traddr": "10.0.0.3", 00:15:32.405 "trsvcid": "4420" 00:15:32.405 }, 00:15:32.405 "secure_channel": false, 00:15:32.405 "sock_impl": "ssl" 00:15:32.405 } 00:15:32.405 } 00:15:32.405 ] 00:15:32.405 } 00:15:32.405 ] 00:15:32.405 }' 00:15:32.405 16:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:32.664 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:32.664 "subsystems": [ 00:15:32.664 { 00:15:32.664 "subsystem": "keyring", 00:15:32.664 "config": [ 00:15:32.664 { 00:15:32.664 "method": "keyring_file_add_key", 00:15:32.664 "params": { 00:15:32.664 "name": "key0", 00:15:32.664 "path": "/tmp/tmp.dyeRk31OMC" 00:15:32.664 } 00:15:32.664 } 00:15:32.664 ] 00:15:32.664 }, 00:15:32.664 { 00:15:32.664 "subsystem": "iobuf", 00:15:32.664 "config": [ 00:15:32.664 { 00:15:32.664 "method": "iobuf_set_options", 00:15:32.664 "params": { 00:15:32.664 "small_pool_count": 8192, 00:15:32.664 "large_pool_count": 1024, 00:15:32.664 "small_bufsize": 8192, 00:15:32.664 "large_bufsize": 135168, 00:15:32.664 "enable_numa": false 00:15:32.664 } 00:15:32.664 } 00:15:32.664 ] 00:15:32.664 }, 00:15:32.664 { 00:15:32.664 "subsystem": "sock", 00:15:32.664 "config": [ 00:15:32.664 { 00:15:32.664 "method": "sock_set_default_impl", 00:15:32.664 "params": { 00:15:32.664 "impl_name": "uring" 00:15:32.664 } 00:15:32.664 }, 00:15:32.664 { 00:15:32.664 "method": "sock_impl_set_options", 00:15:32.664 "params": { 00:15:32.664 "impl_name": "ssl", 00:15:32.664 "recv_buf_size": 4096, 00:15:32.664 "send_buf_size": 4096, 00:15:32.664 "enable_recv_pipe": true, 00:15:32.664 "enable_quickack": false, 00:15:32.664 "enable_placement_id": 0, 00:15:32.664 "enable_zerocopy_send_server": true, 00:15:32.665 "enable_zerocopy_send_client": false, 00:15:32.665 "zerocopy_threshold": 0, 00:15:32.665 "tls_version": 0, 00:15:32.665 "enable_ktls": false 00:15:32.665 } 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "method": "sock_impl_set_options", 00:15:32.665 "params": { 00:15:32.665 "impl_name": "posix", 00:15:32.665 "recv_buf_size": 2097152, 00:15:32.665 "send_buf_size": 2097152, 00:15:32.665 "enable_recv_pipe": true, 00:15:32.665 "enable_quickack": false, 00:15:32.665 "enable_placement_id": 0, 00:15:32.665 "enable_zerocopy_send_server": true, 00:15:32.665 "enable_zerocopy_send_client": false, 00:15:32.665 "zerocopy_threshold": 0, 00:15:32.665 "tls_version": 0, 00:15:32.665 "enable_ktls": false 00:15:32.665 } 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "method": "sock_impl_set_options", 00:15:32.665 "params": { 00:15:32.665 "impl_name": "uring", 00:15:32.665 
"recv_buf_size": 2097152, 00:15:32.665 "send_buf_size": 2097152, 00:15:32.665 "enable_recv_pipe": true, 00:15:32.665 "enable_quickack": false, 00:15:32.665 "enable_placement_id": 0, 00:15:32.665 "enable_zerocopy_send_server": false, 00:15:32.665 "enable_zerocopy_send_client": false, 00:15:32.665 "zerocopy_threshold": 0, 00:15:32.665 "tls_version": 0, 00:15:32.665 "enable_ktls": false 00:15:32.665 } 00:15:32.665 } 00:15:32.665 ] 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "subsystem": "vmd", 00:15:32.665 "config": [] 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "subsystem": "accel", 00:15:32.665 "config": [ 00:15:32.665 { 00:15:32.665 "method": "accel_set_options", 00:15:32.665 "params": { 00:15:32.665 "small_cache_size": 128, 00:15:32.665 "large_cache_size": 16, 00:15:32.665 "task_count": 2048, 00:15:32.665 "sequence_count": 2048, 00:15:32.665 "buf_count": 2048 00:15:32.665 } 00:15:32.665 } 00:15:32.665 ] 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "subsystem": "bdev", 00:15:32.665 "config": [ 00:15:32.665 { 00:15:32.665 "method": "bdev_set_options", 00:15:32.665 "params": { 00:15:32.665 "bdev_io_pool_size": 65535, 00:15:32.665 "bdev_io_cache_size": 256, 00:15:32.665 "bdev_auto_examine": true, 00:15:32.665 "iobuf_small_cache_size": 128, 00:15:32.665 "iobuf_large_cache_size": 16 00:15:32.665 } 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "method": "bdev_raid_set_options", 00:15:32.665 "params": { 00:15:32.665 "process_window_size_kb": 1024, 00:15:32.665 "process_max_bandwidth_mb_sec": 0 00:15:32.665 } 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "method": "bdev_iscsi_set_options", 00:15:32.665 "params": { 00:15:32.665 "timeout_sec": 30 00:15:32.665 } 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "method": "bdev_nvme_set_options", 00:15:32.665 "params": { 00:15:32.665 "action_on_timeout": "none", 00:15:32.665 "timeout_us": 0, 00:15:32.665 "timeout_admin_us": 0, 00:15:32.665 "keep_alive_timeout_ms": 10000, 00:15:32.665 "arbitration_burst": 0, 00:15:32.665 "low_priority_weight": 0, 00:15:32.665 "medium_priority_weight": 0, 00:15:32.665 "high_priority_weight": 0, 00:15:32.665 "nvme_adminq_poll_period_us": 10000, 00:15:32.665 "nvme_ioq_poll_period_us": 0, 00:15:32.665 "io_queue_requests": 512, 00:15:32.665 "delay_cmd_submit": true, 00:15:32.665 "transport_retry_count": 4, 00:15:32.665 "bdev_retry_count": 3, 00:15:32.665 "transport_ack_timeout": 0, 00:15:32.665 "ctrlr_loss_timeout_sec": 0, 00:15:32.665 "reconnect_delay_sec": 0, 00:15:32.665 "fast_io_fail_timeout_sec": 0, 00:15:32.665 "disable_auto_failback": false, 00:15:32.665 "generate_uuids": false, 00:15:32.665 "transport_tos": 0, 00:15:32.665 "nvme_error_stat": false, 00:15:32.665 "rdma_srq_size": 0, 00:15:32.665 "io_path_stat": false, 00:15:32.665 "allow_accel_sequence": false, 00:15:32.665 "rdma_max_cq_size": 0, 00:15:32.665 "rdma_cm_event_timeout_ms": 0, 00:15:32.665 "dhchap_digests": [ 00:15:32.665 "sha256", 00:15:32.665 "sha384", 00:15:32.665 "sha512" 00:15:32.665 ], 00:15:32.665 "dhchap_dhgroups": [ 00:15:32.665 "null", 00:15:32.665 "ffdhe2048", 00:15:32.665 "ffdhe3072", 00:15:32.665 "ffdhe4096", 00:15:32.665 "ffdhe6144", 00:15:32.665 "ffdhe8192" 00:15:32.665 ] 00:15:32.665 } 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "method": "bdev_nvme_attach_controller", 00:15:32.665 "params": { 00:15:32.665 "name": "nvme0", 00:15:32.665 "trtype": "TCP", 00:15:32.665 "adrfam": "IPv4", 00:15:32.665 "traddr": "10.0.0.3", 00:15:32.665 "trsvcid": "4420", 00:15:32.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.665 "prchk_reftag": false, 00:15:32.665 
"prchk_guard": false, 00:15:32.665 "ctrlr_loss_timeout_sec": 0, 00:15:32.665 "reconnect_delay_sec": 0, 00:15:32.665 "fast_io_fail_timeout_sec": 0, 00:15:32.665 "psk": "key0", 00:15:32.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.665 "hdgst": false, 00:15:32.665 "ddgst": false, 00:15:32.665 "multipath": "multipath" 00:15:32.665 } 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "method": "bdev_nvme_set_hotplug", 00:15:32.665 "params": { 00:15:32.665 "period_us": 100000, 00:15:32.665 "enable": false 00:15:32.665 } 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "method": "bdev_enable_histogram", 00:15:32.665 "params": { 00:15:32.665 "name": "nvme0n1", 00:15:32.665 "enable": true 00:15:32.665 } 00:15:32.665 }, 00:15:32.665 { 00:15:32.665 "method": "bdev_wait_for_examine" 00:15:32.665 } 00:15:32.665 ] 00:15:32.666 }, 00:15:32.666 { 00:15:32.666 "subsystem": "nbd", 00:15:32.666 "config": [] 00:15:32.666 } 00:15:32.666 ] 00:15:32.666 }' 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84130 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84130 ']' 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84130 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84130 00:15:32.666 killing process with pid 84130 00:15:32.666 Received shutdown signal, test time was about 1.000000 seconds 00:15:32.666 00:15:32.666 Latency(us) 00:15:32.666 [2024-11-26T16:20:58.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.666 [2024-11-26T16:20:58.319Z] =================================================================================================================== 00:15:32.666 [2024-11-26T16:20:58.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84130' 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84130 00:15:32.666 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84130 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84105 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84105 ']' 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84105 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84105 00:15:32.925 killing process with pid 84105 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84105' 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84105 00:15:32.925 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84105 00:15:33.185 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:33.185 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.185 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:33.185 "subsystems": [ 00:15:33.185 { 00:15:33.185 "subsystem": "keyring", 00:15:33.185 "config": [ 00:15:33.185 { 00:15:33.185 "method": "keyring_file_add_key", 00:15:33.185 "params": { 00:15:33.185 "name": "key0", 00:15:33.185 "path": "/tmp/tmp.dyeRk31OMC" 00:15:33.185 } 00:15:33.185 } 00:15:33.185 ] 00:15:33.185 }, 00:15:33.185 { 00:15:33.185 "subsystem": "iobuf", 00:15:33.185 "config": [ 00:15:33.185 { 00:15:33.185 "method": "iobuf_set_options", 00:15:33.185 "params": { 00:15:33.185 "small_pool_count": 8192, 00:15:33.185 "large_pool_count": 1024, 00:15:33.185 "small_bufsize": 8192, 00:15:33.185 "large_bufsize": 135168, 00:15:33.185 "enable_numa": false 00:15:33.185 } 00:15:33.185 } 00:15:33.185 ] 00:15:33.185 }, 00:15:33.185 { 00:15:33.185 "subsystem": "sock", 00:15:33.185 "config": [ 00:15:33.185 { 00:15:33.185 "method": "sock_set_default_impl", 00:15:33.185 "params": { 00:15:33.185 "impl_name": "uring" 00:15:33.185 } 00:15:33.185 }, 00:15:33.185 { 00:15:33.185 "method": "sock_impl_set_options", 00:15:33.185 "params": { 00:15:33.185 "impl_name": "ssl", 00:15:33.185 "recv_buf_size": 4096, 00:15:33.185 "send_buf_size": 4096, 00:15:33.185 "enable_recv_pipe": true, 00:15:33.185 "enable_quickack": false, 00:15:33.185 "enable_placement_id": 0, 00:15:33.185 "enable_zerocopy_send_server": true, 00:15:33.185 "enable_zerocopy_send_client": false, 00:15:33.185 "zerocopy_threshold": 0, 00:15:33.185 "tls_version": 0, 00:15:33.185 "enable_ktls": false 00:15:33.185 } 00:15:33.185 }, 00:15:33.185 { 00:15:33.185 "method": "sock_impl_set_options", 00:15:33.185 "params": { 00:15:33.185 "impl_name": "posix", 00:15:33.185 "recv_buf_size": 2097152, 00:15:33.185 "send_buf_size": 2097152, 00:15:33.185 "enable_recv_pipe": true, 00:15:33.185 "enable_quickack": false, 00:15:33.185 "enable_placement_id": 0, 00:15:33.185 "enable_zerocopy_send_server": true, 00:15:33.185 "enable_zerocopy_send_client": false, 00:15:33.185 "zerocopy_threshold": 0, 00:15:33.185 "tls_version": 0, 00:15:33.185 "enable_ktls": false 00:15:33.185 } 00:15:33.185 }, 00:15:33.185 { 00:15:33.185 "method": "sock_impl_set_options", 00:15:33.185 "params": { 00:15:33.185 "impl_name": "uring", 00:15:33.185 "recv_buf_size": 2097152, 00:15:33.185 "send_buf_size": 2097152, 00:15:33.185 "enable_recv_pipe": true, 00:15:33.185 "enable_quickack": false, 00:15:33.185 "enable_placement_id": 0, 00:15:33.185 "enable_zerocopy_send_server": false, 00:15:33.185 "enable_zerocopy_send_client": false, 00:15:33.185 "zerocopy_threshold": 0, 00:15:33.185 "tls_version": 0, 00:15:33.185 "enable_ktls": false 00:15:33.185 } 00:15:33.185 } 00:15:33.185 ] 00:15:33.185 }, 00:15:33.185 { 00:15:33.185 "subsystem": "vmd", 00:15:33.185 "config": [] 00:15:33.185 }, 00:15:33.185 { 00:15:33.185 
"subsystem": "accel", 00:15:33.185 "config": [ 00:15:33.185 { 00:15:33.185 "method": "accel_set_options", 00:15:33.185 "params": { 00:15:33.185 "small_cache_size": 128, 00:15:33.185 "large_cache_size": 16, 00:15:33.185 "task_count": 2048, 00:15:33.185 "sequence_count": 2048, 00:15:33.185 "buf_count": 2048 00:15:33.185 } 00:15:33.185 } 00:15:33.185 ] 00:15:33.185 }, 00:15:33.185 { 00:15:33.185 "subsystem": "bdev", 00:15:33.185 "config": [ 00:15:33.185 { 00:15:33.185 "method": "bdev_set_options", 00:15:33.185 "params": { 00:15:33.185 "bdev_io_pool_size": 65535, 00:15:33.185 "bdev_io_cache_size": 256, 00:15:33.185 "bdev_auto_examine": true, 00:15:33.185 "iobuf_small_cache_size": 128, 00:15:33.185 "iobuf_large_cache_size": 16 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "bdev_raid_set_options", 00:15:33.186 "params": { 00:15:33.186 "process_window_size_kb": 1024, 00:15:33.186 "process_max_bandwidth_mb_sec": 0 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "bdev_iscsi_set_options", 00:15:33.186 "params": { 00:15:33.186 "timeout_sec": 30 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "bdev_nvme_set_options", 00:15:33.186 "params": { 00:15:33.186 "action_on_timeout": "none", 00:15:33.186 "timeout_us": 0, 00:15:33.186 "timeout_admin_us": 0, 00:15:33.186 "keep_alive_timeout_ms": 10000, 00:15:33.186 "arbitration_burst": 0, 00:15:33.186 "low_priority_weight": 0, 00:15:33.186 "medium_priority_weight": 0, 00:15:33.186 "high_priority_weight": 0, 00:15:33.186 "nvme_adminq_poll_period_us": 10000, 00:15:33.186 "nvme_ioq_poll_period_us": 0, 00:15:33.186 "io_queue_requests": 0, 00:15:33.186 "delay_cmd_submit": true, 00:15:33.186 "transport_retry_count": 4, 00:15:33.186 "bdev_retry_count": 3, 00:15:33.186 "transport_ack_timeout": 0, 00:15:33.186 "ctrlr_loss_timeout_sec": 0, 00:15:33.186 "reconnect_delay_sec": 0, 00:15:33.186 "fast_io_fail_timeout_sec": 0, 00:15:33.186 "disable_auto_failback": false, 00:15:33.186 "generate_uuids": false, 00:15:33.186 "transport_tos": 0, 00:15:33.186 "nvme_error_stat": false, 00:15:33.186 "rdma_srq_size": 0, 00:15:33.186 "io_path_stat": false, 00:15:33.186 "allow_accel_sequence": false, 00:15:33.186 "rdma_max_cq_size": 0, 00:15:33.186 "rdma_cm_event_timeout_ms": 0, 00:15:33.186 "dhchap_digests": [ 00:15:33.186 "sha256", 00:15:33.186 "sha384", 00:15:33.186 "sha512" 00:15:33.186 ], 00:15:33.186 "dhchap_dhgroups": [ 00:15:33.186 "null", 00:15:33.186 "ffdhe2048", 00:15:33.186 "ffdhe3072", 00:15:33.186 "ffdhe4096", 00:15:33.186 "ffdhe6144", 00:15:33.186 "ffdhe8192" 00:15:33.186 ] 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "bdev_nvme_set_hotplug", 00:15:33.186 "params": { 00:15:33.186 "period_us": 100000, 00:15:33.186 "enable": false 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "bdev_malloc_create", 00:15:33.186 "params": { 00:15:33.186 "name": "malloc0", 00:15:33.186 "num_blocks": 8192, 00:15:33.186 "block_size": 4096, 00:15:33.186 "physical_block_size": 4096, 00:15:33.186 "uuid": "d3caf4b6-406e-495c-8034-816cfc4bc004", 00:15:33.186 "optimal_io_boundary": 0, 00:15:33.186 "md_size": 0, 00:15:33.186 "dif_type": 0, 00:15:33.186 "dif_is_head_of_md": false, 00:15:33.186 "dif_pi_format": 0 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "bdev_wait_for_examine" 00:15:33.186 } 00:15:33.186 ] 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "subsystem": "nbd", 00:15:33.186 "config": [] 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "subsystem": "scheduler", 
00:15:33.186 "config": [ 00:15:33.186 { 00:15:33.186 "method": "framework_set_scheduler", 00:15:33.186 "params": { 00:15:33.186 "name": "static" 00:15:33.186 } 00:15:33.186 } 00:15:33.186 ] 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "subsystem": "nvmf", 00:15:33.186 "config": [ 00:15:33.186 { 00:15:33.186 "method": "nvmf_set_config", 00:15:33.186 "params": { 00:15:33.186 "discovery_filter": "match_any", 00:15:33.186 "admin_cmd_passthru": { 00:15:33.186 "identify_ctrlr": false 00:15:33.186 }, 00:15:33.186 "dhchap_digests": [ 00:15:33.186 "sha256", 00:15:33.186 "sha384", 00:15:33.186 "sha512" 00:15:33.186 ], 00:15:33.186 "dhchap_dhgroups": [ 00:15:33.186 "null", 00:15:33.186 "ffdhe2048", 00:15:33.186 "ffdhe3072", 00:15:33.186 "ffdhe4096", 00:15:33.186 "ffdhe6144", 00:15:33.186 "ffdhe8192" 00:15:33.186 ] 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "nvmf_set_max_subsystems", 00:15:33.186 "params": { 00:15:33.186 "max_subsystems": 1024 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "nvmf_set_crdt", 00:15:33.186 "params": { 00:15:33.186 "crdt1": 0, 00:15:33.186 "crdt2": 0, 00:15:33.186 "crdt3": 0 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "nvmf_create_transport", 00:15:33.186 "params": { 00:15:33.186 "trtype": "TCP", 00:15:33.186 "max_queue_depth": 128, 00:15:33.186 "max_io_qpairs_per_ctrlr": 127, 00:15:33.186 "in_capsule_data_size": 4096, 00:15:33.186 "max_io_size": 131072, 00:15:33.186 "io_unit_size": 131072, 00:15:33.186 "max_aq_depth": 128, 00:15:33.186 "num_shared_buffers": 511, 00:15:33.186 "buf_cache_size": 4294967295, 00:15:33.186 "dif_insert_or_strip": false, 00:15:33.186 "zcopy": false, 00:15:33.186 "c2h_success": false, 00:15:33.186 "sock_priority": 0, 00:15:33.186 "abort_timeout_sec": 1, 00:15:33.186 "ack_timeout": 0, 00:15:33.186 "data_wr_pool_size": 0 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "nvmf_create_subsystem", 00:15:33.186 "params": { 00:15:33.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.186 "allow_any_host": false, 00:15:33.186 "serial_number": "00000000000000000000", 00:15:33.186 "model_number": "SPDK bdev Controller", 00:15:33.186 "max_namespaces": 32, 00:15:33.186 "min_cntlid": 1, 00:15:33.186 "max_cntlid": 65519, 00:15:33.186 "ana_reporting": false 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "nvmf_subsystem_add_host", 00:15:33.186 "params": { 00:15:33.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.186 "host": "nqn.2016-06.io.spdk:host1", 00:15:33.186 "psk": "key0" 00:15:33.186 } 00:15:33.186 }, 00:15:33.186 { 00:15:33.186 "method": "nvmf_subsystem_add_ns", 00:15:33.186 "params": { 00:15:33.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.186 "namespace": { 00:15:33.186 "nsid": 1, 00:15:33.186 "bdev_name": "malloc0", 00:15:33.186 "nguid": "D3CAF4B6406E495C8034816CFC4BC004", 00:15:33.186 "uuid": "d3caf4b6-406e-495c-8034-816cfc4bc004", 00:15:33.186 "no_auto_visible": false 00:15:33.186 } 00:15:33.187 } 00:15:33.187 }, 00:15:33.187 { 00:15:33.187 "method": "nvmf_subsystem_add_listener", 00:15:33.187 "params": { 00:15:33.187 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.187 "listen_address": { 00:15:33.187 "trtype": "TCP", 00:15:33.187 "adrfam": "IPv4", 00:15:33.187 "traddr": "10.0.0.3", 00:15:33.187 "trsvcid": "4420" 00:15:33.187 }, 00:15:33.187 "secure_channel": false, 00:15:33.187 "sock_impl": "ssl" 00:15:33.187 } 00:15:33.187 } 00:15:33.187 ] 00:15:33.187 } 00:15:33.187 ] 00:15:33.187 }' 00:15:33.187 16:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84183 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84183 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84183 ']' 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.187 16:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.187 [2024-11-26 16:20:58.668759] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:33.187 [2024-11-26 16:20:58.669198] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.187 [2024-11-26 16:20:58.820398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.446 [2024-11-26 16:20:58.841660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.446 [2024-11-26 16:20:58.841948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.446 [2024-11-26 16:20:58.842164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.446 [2024-11-26 16:20:58.842276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.446 [2024-11-26 16:20:58.842292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:33.446 [2024-11-26 16:20:58.842666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.446 [2024-11-26 16:20:58.985613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.446 [2024-11-26 16:20:59.041199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.446 [2024-11-26 16:20:59.073137] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.446 [2024-11-26 16:20:59.073337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.013 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.013 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:34.013 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.013 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.013 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84215 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84215 /var/tmp/bdevperf.sock 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84215 ']' 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:34.273 16:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:34.273 "subsystems": [ 00:15:34.273 { 00:15:34.273 "subsystem": "keyring", 00:15:34.273 "config": [ 00:15:34.273 { 00:15:34.273 "method": "keyring_file_add_key", 00:15:34.273 "params": { 00:15:34.273 "name": "key0", 00:15:34.273 "path": "/tmp/tmp.dyeRk31OMC" 00:15:34.273 } 00:15:34.273 } 00:15:34.273 ] 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "subsystem": "iobuf", 00:15:34.273 "config": [ 00:15:34.273 { 00:15:34.273 "method": "iobuf_set_options", 00:15:34.273 "params": { 00:15:34.273 "small_pool_count": 8192, 00:15:34.273 "large_pool_count": 1024, 00:15:34.273 "small_bufsize": 8192, 00:15:34.273 "large_bufsize": 135168, 00:15:34.273 "enable_numa": false 00:15:34.273 } 00:15:34.273 } 00:15:34.273 ] 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "subsystem": "sock", 00:15:34.273 "config": [ 00:15:34.273 { 00:15:34.273 "method": "sock_set_default_impl", 00:15:34.273 "params": { 00:15:34.273 "impl_name": "uring" 00:15:34.273 } 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "method": "sock_impl_set_options", 00:15:34.273 "params": { 00:15:34.273 "impl_name": "ssl", 00:15:34.273 "recv_buf_size": 4096, 00:15:34.273 "send_buf_size": 4096, 00:15:34.273 "enable_recv_pipe": true, 00:15:34.273 "enable_quickack": false, 00:15:34.273 "enable_placement_id": 0, 00:15:34.273 "enable_zerocopy_send_server": true, 00:15:34.273 "enable_zerocopy_send_client": false, 00:15:34.273 "zerocopy_threshold": 0, 00:15:34.273 "tls_version": 0, 00:15:34.273 "enable_ktls": false 00:15:34.273 } 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "method": "sock_impl_set_options", 00:15:34.273 "params": { 00:15:34.273 "impl_name": "posix", 00:15:34.273 "recv_buf_size": 2097152, 00:15:34.273 "send_buf_size": 2097152, 00:15:34.273 "enable_recv_pipe": true, 00:15:34.273 "enable_quickack": false, 00:15:34.273 "enable_placement_id": 0, 00:15:34.273 "enable_zerocopy_send_server": true, 00:15:34.273 "enable_zerocopy_send_client": false, 00:15:34.273 "zerocopy_threshold": 0, 00:15:34.273 "tls_version": 0, 00:15:34.273 "enable_ktls": false 00:15:34.273 } 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "method": "sock_impl_set_options", 00:15:34.273 "params": { 00:15:34.273 "impl_name": "uring", 00:15:34.273 "recv_buf_size": 2097152, 00:15:34.273 "send_buf_size": 2097152, 00:15:34.273 "enable_recv_pipe": true, 00:15:34.273 "enable_quickack": false, 00:15:34.273 "enable_placement_id": 0, 00:15:34.273 "enable_zerocopy_send_server": false, 00:15:34.273 "enable_zerocopy_send_client": false, 00:15:34.273 "zerocopy_threshold": 0, 00:15:34.273 "tls_version": 0, 00:15:34.273 "enable_ktls": false 00:15:34.273 } 00:15:34.273 } 00:15:34.273 ] 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "subsystem": "vmd", 00:15:34.273 "config": [] 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "subsystem": "accel", 00:15:34.273 "config": [ 00:15:34.273 { 00:15:34.273 "method": "accel_set_options", 00:15:34.273 "params": { 00:15:34.273 "small_cache_size": 128, 00:15:34.273 "large_cache_size": 16, 00:15:34.273 "task_count": 2048, 00:15:34.273 "sequence_count": 2048, 
00:15:34.273 "buf_count": 2048 00:15:34.273 } 00:15:34.273 } 00:15:34.273 ] 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "subsystem": "bdev", 00:15:34.273 "config": [ 00:15:34.273 { 00:15:34.273 "method": "bdev_set_options", 00:15:34.273 "params": { 00:15:34.273 "bdev_io_pool_size": 65535, 00:15:34.273 "bdev_io_cache_size": 256, 00:15:34.273 "bdev_auto_examine": true, 00:15:34.273 "iobuf_small_cache_size": 128, 00:15:34.273 "iobuf_large_cache_size": 16 00:15:34.273 } 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "method": "bdev_raid_set_options", 00:15:34.273 "params": { 00:15:34.273 "process_window_size_kb": 1024, 00:15:34.273 "process_max_bandwidth_mb_sec": 0 00:15:34.273 } 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "method": "bdev_iscsi_set_options", 00:15:34.273 "params": { 00:15:34.273 "timeout_sec": 30 00:15:34.273 } 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "method": "bdev_nvme_set_options", 00:15:34.273 "params": { 00:15:34.273 "action_on_timeout": "none", 00:15:34.273 "timeout_us": 0, 00:15:34.273 "timeout_admin_us": 0, 00:15:34.273 "keep_alive_timeout_ms": 10000, 00:15:34.273 "arbitration_burst": 0, 00:15:34.273 "low_priority_weight": 0, 00:15:34.273 "medium_priority_weight": 0, 00:15:34.273 "high_priority_weight": 0, 00:15:34.273 "nvme_adminq_poll_period_us": 10000, 00:15:34.273 "nvme_ioq_poll_period_us": 0, 00:15:34.273 "io_queue_requests": 512, 00:15:34.273 "delay_cmd_submit": true, 00:15:34.273 "transport_retry_count": 4, 00:15:34.273 "bdev_retry_count": 3, 00:15:34.273 "transport_ack_timeout": 0, 00:15:34.273 "ctrlr_loss_timeout_sec": 0, 00:15:34.273 "reconnect_delay_sec": 0, 00:15:34.273 "fast_io_fail_timeout_sec": 0, 00:15:34.273 "disable_auto_failback": false, 00:15:34.273 "generate_uuids": false, 00:15:34.273 "transport_tos": 0, 00:15:34.273 "nvme_error_stat": false, 00:15:34.273 "rdma_srq_size": 0, 00:15:34.273 "io_path_stat": false, 00:15:34.273 "allow_accel_sequence": false, 00:15:34.273 "rdma_max_cq_size": 0, 00:15:34.273 "rdma_cm_event_timeout_ms": 0, 00:15:34.273 "dhchap_digests": [ 00:15:34.273 "sha256", 00:15:34.273 "sha384", 00:15:34.273 "sha512" 00:15:34.273 ], 00:15:34.273 "dhchap_dhgroups": [ 00:15:34.273 "null", 00:15:34.273 "ffdhe2048", 00:15:34.273 "ffdhe3072", 00:15:34.273 "ffdhe4096", 00:15:34.273 "ffdhe6144", 00:15:34.273 "ffdhe8192" 00:15:34.273 ] 00:15:34.273 } 00:15:34.273 }, 00:15:34.273 { 00:15:34.273 "method": "bdev_nvme_attach_controller", 00:15:34.273 "params": { 00:15:34.273 "name": "nvme0", 00:15:34.273 "trtype": "TCP", 00:15:34.273 "adrfam": "IPv4", 00:15:34.273 "traddr": "10.0.0.3", 00:15:34.273 "trsvcid": "4420", 00:15:34.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.273 "prchk_reftag": false, 00:15:34.273 "prchk_guard": false, 00:15:34.274 "ctrlr_loss_timeout_sec": 0, 00:15:34.274 "reconnect_delay_sec": 0, 00:15:34.274 "fast_io_fail_timeout_sec": 0, 00:15:34.274 "psk": "key0", 00:15:34.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.274 "hdgst": false, 00:15:34.274 "ddgst": false, 00:15:34.274 "multipath": "multipath" 00:15:34.274 } 00:15:34.274 }, 00:15:34.274 { 00:15:34.274 "method": "bdev_nvme_set_hotplug", 00:15:34.274 "params": { 00:15:34.274 "period_us": 100000, 00:15:34.274 "enable": false 00:15:34.274 } 00:15:34.274 }, 00:15:34.274 { 00:15:34.274 "method": "bdev_enable_histogram", 00:15:34.274 "params": { 00:15:34.274 "name": "nvme0n1", 00:15:34.274 "enable": true 00:15:34.274 } 00:15:34.274 }, 00:15:34.274 { 00:15:34.274 "method": "bdev_wait_for_examine" 00:15:34.274 } 00:15:34.274 ] 00:15:34.274 }, 00:15:34.274 { 
00:15:34.274 "subsystem": "nbd", 00:15:34.274 "config": [] 00:15:34.274 } 00:15:34.274 ] 00:15:34.274 }' 00:15:34.274 [2024-11-26 16:20:59.735393] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:34.274 [2024-11-26 16:20:59.735490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84215 ] 00:15:34.274 [2024-11-26 16:20:59.886779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.274 [2024-11-26 16:20:59.912041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.536 [2024-11-26 16:21:00.028813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.536 [2024-11-26 16:21:00.060935] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:35.151 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.151 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:35.151 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:35.151 16:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:35.409 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.409 16:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:35.668 Running I/O for 1 seconds... 
00:15:36.605 3968.00 IOPS, 15.50 MiB/s 00:15:36.605 Latency(us) 00:15:36.605 [2024-11-26T16:21:02.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.606 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.606 Verification LBA range: start 0x0 length 0x2000 00:15:36.606 nvme0n1 : 1.03 3985.98 15.57 0.00 0.00 31756.07 11200.70 23950.43 00:15:36.606 [2024-11-26T16:21:02.259Z] =================================================================================================================== 00:15:36.606 [2024-11-26T16:21:02.259Z] Total : 3985.98 15.57 0.00 0.00 31756.07 11200.70 23950.43 00:15:36.606 { 00:15:36.606 "results": [ 00:15:36.606 { 00:15:36.606 "job": "nvme0n1", 00:15:36.606 "core_mask": "0x2", 00:15:36.606 "workload": "verify", 00:15:36.606 "status": "finished", 00:15:36.606 "verify_range": { 00:15:36.606 "start": 0, 00:15:36.606 "length": 8192 00:15:36.606 }, 00:15:36.606 "queue_depth": 128, 00:15:36.606 "io_size": 4096, 00:15:36.606 "runtime": 1.027602, 00:15:36.606 "iops": 3985.9790074367315, 00:15:36.606 "mibps": 15.570230497799733, 00:15:36.606 "io_failed": 0, 00:15:36.606 "io_timeout": 0, 00:15:36.606 "avg_latency_us": 31756.065454545456, 00:15:36.606 "min_latency_us": 11200.698181818181, 00:15:36.606 "max_latency_us": 23950.429090909092 00:15:36.606 } 00:15:36.606 ], 00:15:36.606 "core_count": 1 00:15:36.606 } 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:36.606 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:36.606 nvmf_trace.0 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84215 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84215 ']' 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84215 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84215 00:15:36.865 killing 
process with pid 84215 00:15:36.865 Received shutdown signal, test time was about 1.000000 seconds 00:15:36.865 00:15:36.865 Latency(us) 00:15:36.865 [2024-11-26T16:21:02.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.865 [2024-11-26T16:21:02.518Z] =================================================================================================================== 00:15:36.865 [2024-11-26T16:21:02.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84215' 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84215 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84215 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:36.865 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:36.865 rmmod nvme_tcp 00:15:37.125 rmmod nvme_fabrics 00:15:37.125 rmmod nvme_keyring 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 84183 ']' 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 84183 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84183 ']' 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84183 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84183 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84183' 00:15:37.125 killing process with pid 84183 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84183 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 84183 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:37.125 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.dp17HvKQ1X /tmp/tmp.uKqeJnuq6N /tmp/tmp.dyeRk31OMC 00:15:37.385 00:15:37.385 real 1m20.539s 00:15:37.385 user 2m12.471s 00:15:37.385 sys 0m25.739s 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.385 ************************************ 00:15:37.385 END TEST nvmf_tls 00:15:37.385 
************************************ 00:15:37.385 16:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.645 ************************************ 00:15:37.645 START TEST nvmf_fips 00:15:37.645 ************************************ 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:37.645 * Looking for test storage... 00:15:37.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:37.645 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:37.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.646 --rc genhtml_branch_coverage=1 00:15:37.646 --rc genhtml_function_coverage=1 00:15:37.646 --rc genhtml_legend=1 00:15:37.646 --rc geninfo_all_blocks=1 00:15:37.646 --rc geninfo_unexecuted_blocks=1 00:15:37.646 00:15:37.646 ' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:37.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.646 --rc genhtml_branch_coverage=1 00:15:37.646 --rc genhtml_function_coverage=1 00:15:37.646 --rc genhtml_legend=1 00:15:37.646 --rc geninfo_all_blocks=1 00:15:37.646 --rc geninfo_unexecuted_blocks=1 00:15:37.646 00:15:37.646 ' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:37.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.646 --rc genhtml_branch_coverage=1 00:15:37.646 --rc genhtml_function_coverage=1 00:15:37.646 --rc genhtml_legend=1 00:15:37.646 --rc geninfo_all_blocks=1 00:15:37.646 --rc geninfo_unexecuted_blocks=1 00:15:37.646 00:15:37.646 ' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:37.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.646 --rc genhtml_branch_coverage=1 00:15:37.646 --rc genhtml_function_coverage=1 00:15:37.646 --rc genhtml_legend=1 00:15:37.646 --rc geninfo_all_blocks=1 00:15:37.646 --rc geninfo_unexecuted_blocks=1 00:15:37.646 00:15:37.646 ' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.646 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.646 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:37.907 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:37.908 Error setting digest 00:15:37.908 40C27902C57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:37.908 40C27902C57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:37.908 
16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:37.908 Cannot find device "nvmf_init_br" 00:15:37.908 16:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:37.908 Cannot find device "nvmf_init_br2" 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:37.908 Cannot find device "nvmf_tgt_br" 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.908 Cannot find device "nvmf_tgt_br2" 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:37.908 Cannot find device "nvmf_init_br" 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:37.908 Cannot find device "nvmf_init_br2" 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:37.908 Cannot find device "nvmf_tgt_br" 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:37.908 Cannot find device "nvmf_tgt_br2" 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:37.908 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:38.168 Cannot find device "nvmf_br" 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:38.168 Cannot find device "nvmf_init_if" 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:38.168 Cannot find device "nvmf_init_if2" 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:38.168 16:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:38.168 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:38.427 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:38.427 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:38.427 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:38.427 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:38.427 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:38.427 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:38.427 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:38.427 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:38.427 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:38.427 00:15:38.427 --- 10.0.0.3 ping statistics --- 00:15:38.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.427 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:38.427 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:38.427 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:38.427 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:38.427 00:15:38.428 --- 10.0.0.4 ping statistics --- 00:15:38.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.428 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:38.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:38.428 00:15:38.428 --- 10.0.0.1 ping statistics --- 00:15:38.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.428 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:38.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:38.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:38.428 00:15:38.428 --- 10.0.0.2 ping statistics --- 00:15:38.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.428 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=84525 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 84525 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84525 ']' 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.428 16:21:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:38.428 [2024-11-26 16:21:03.958261] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:15:38.428 [2024-11-26 16:21:03.958370] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.686 [2024-11-26 16:21:04.113221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.687 [2024-11-26 16:21:04.137471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.687 [2024-11-26 16:21:04.137526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.687 [2024-11-26 16:21:04.137541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.687 [2024-11-26 16:21:04.137552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.687 [2024-11-26 16:21:04.137561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.687 [2024-11-26 16:21:04.137952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.687 [2024-11-26 16:21:04.173164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.oHk 00:15:39.624 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:39.625 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.oHk 00:15:39.625 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.oHk 00:15:39.625 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.oHk 00:15:39.625 16:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.625 [2024-11-26 16:21:05.228528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.625 [2024-11-26 16:21:05.244425] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:39.625 [2024-11-26 16:21:05.244609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:39.884 malloc0 00:15:39.884 16:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:39.884 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84565 00:15:39.884 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:39.884 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84565 /var/tmp/bdevperf.sock 00:15:39.884 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84565 ']' 00:15:39.884 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.884 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.884 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:39.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:39.884 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.884 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:39.884 [2024-11-26 16:21:05.392726] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:39.884 [2024-11-26 16:21:05.392838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84565 ] 00:15:40.144 [2024-11-26 16:21:05.545760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.144 [2024-11-26 16:21:05.571081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.144 [2024-11-26 16:21:05.605778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.144 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.144 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:40.144 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.oHk 00:15:40.403 16:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:40.663 [2024-11-26 16:21:06.236908] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:40.663 TLSTESTn1 00:15:40.922 16:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:40.922 Running I/O for 10 seconds... 
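(Note) The TLS interop path traced above can be condensed into a short standalone sketch. This is only a hedged restatement of commands already visible in the fips.sh trace, assuming the same spdk repo layout, rpc.py interface, bdevperf binary, PSK value, and 10.0.0.3:4420 listener as in this run; it is not the full test script, and the real test additionally waits for each RPC socket (waitforlisten) before issuing RPCs.

  # Sketch (run from the spdk repo root): exercise NVMe/TCP with a TLS PSK via bdevperf, as traced above.
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)            # e.g. /tmp/spdk-psk.oHk in this run
  echo -n "$key" > "$key_path" && chmod 0600 "$key_path"
  # Start bdevperf paused (-z) with its own RPC socket; the trace shows pid 84565 listening on bdevperf.sock.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # Register the PSK file as key0, then attach the TLS-enabled controller to the target at 10.0.0.3:4420.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # Kick off the 10-second verify workload against the TLSTESTn1 bdev created by the attach call.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS samples and the latency summary that follow are the output of that perform_tests run.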
00:15:43.234 4050.00 IOPS, 15.82 MiB/s [2024-11-26T16:21:09.824Z] 4058.00 IOPS, 15.85 MiB/s [2024-11-26T16:21:10.761Z] 4075.33 IOPS, 15.92 MiB/s [2024-11-26T16:21:11.699Z] 4089.25 IOPS, 15.97 MiB/s [2024-11-26T16:21:12.637Z] 4091.00 IOPS, 15.98 MiB/s [2024-11-26T16:21:13.573Z] 4095.17 IOPS, 16.00 MiB/s [2024-11-26T16:21:14.524Z] 4099.14 IOPS, 16.01 MiB/s [2024-11-26T16:21:15.904Z] 4101.50 IOPS, 16.02 MiB/s [2024-11-26T16:21:16.472Z] 4101.56 IOPS, 16.02 MiB/s [2024-11-26T16:21:16.731Z] 4101.30 IOPS, 16.02 MiB/s 00:15:51.079 Latency(us) 00:15:51.079 [2024-11-26T16:21:16.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.079 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:51.079 Verification LBA range: start 0x0 length 0x2000 00:15:51.079 TLSTESTn1 : 10.02 4106.50 16.04 0.00 0.00 31108.93 7268.54 28001.75 00:15:51.079 [2024-11-26T16:21:16.732Z] =================================================================================================================== 00:15:51.079 [2024-11-26T16:21:16.732Z] Total : 4106.50 16.04 0.00 0.00 31108.93 7268.54 28001.75 00:15:51.079 { 00:15:51.079 "results": [ 00:15:51.079 { 00:15:51.079 "job": "TLSTESTn1", 00:15:51.079 "core_mask": "0x4", 00:15:51.079 "workload": "verify", 00:15:51.079 "status": "finished", 00:15:51.079 "verify_range": { 00:15:51.079 "start": 0, 00:15:51.079 "length": 8192 00:15:51.079 }, 00:15:51.079 "queue_depth": 128, 00:15:51.079 "io_size": 4096, 00:15:51.079 "runtime": 10.018028, 00:15:51.079 "iops": 4106.496807555339, 00:15:51.079 "mibps": 16.041003154513042, 00:15:51.079 "io_failed": 0, 00:15:51.079 "io_timeout": 0, 00:15:51.079 "avg_latency_us": 31108.925984235262, 00:15:51.079 "min_latency_us": 7268.538181818182, 00:15:51.079 "max_latency_us": 28001.745454545453 00:15:51.079 } 00:15:51.079 ], 00:15:51.079 "core_count": 1 00:15:51.079 } 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:51.079 nvmf_trace.0 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84565 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84565 ']' 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
84565 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84565 00:15:51.079 killing process with pid 84565 00:15:51.079 Received shutdown signal, test time was about 10.000000 seconds 00:15:51.079 00:15:51.079 Latency(us) 00:15:51.079 [2024-11-26T16:21:16.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.079 [2024-11-26T16:21:16.732Z] =================================================================================================================== 00:15:51.079 [2024-11-26T16:21:16.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84565' 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84565 00:15:51.079 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84565 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.339 rmmod nvme_tcp 00:15:51.339 rmmod nvme_fabrics 00:15:51.339 rmmod nvme_keyring 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 84525 ']' 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 84525 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84525 ']' 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 84525 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84525 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:51.339 killing process with pid 84525 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84525' 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84525 00:15:51.339 16:21:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84525 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.599 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:51.859 16:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.oHk 00:15:51.859 ************************************ 00:15:51.859 END TEST nvmf_fips 00:15:51.859 ************************************ 00:15:51.859 00:15:51.859 real 0m14.225s 00:15:51.859 user 0m19.621s 00:15:51.859 sys 0m5.502s 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.859 ************************************ 00:15:51.859 START TEST nvmf_control_msg_list 00:15:51.859 ************************************ 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:51.859 * Looking for test storage... 00:15:51.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:15:51.859 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.120 --rc genhtml_branch_coverage=1 00:15:52.120 --rc genhtml_function_coverage=1 00:15:52.120 --rc genhtml_legend=1 00:15:52.120 --rc geninfo_all_blocks=1 00:15:52.120 --rc geninfo_unexecuted_blocks=1 00:15:52.120 00:15:52.120 ' 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.120 --rc genhtml_branch_coverage=1 00:15:52.120 --rc genhtml_function_coverage=1 00:15:52.120 --rc genhtml_legend=1 00:15:52.120 --rc geninfo_all_blocks=1 00:15:52.120 --rc geninfo_unexecuted_blocks=1 00:15:52.120 00:15:52.120 ' 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.120 --rc genhtml_branch_coverage=1 00:15:52.120 --rc genhtml_function_coverage=1 00:15:52.120 --rc genhtml_legend=1 00:15:52.120 --rc geninfo_all_blocks=1 00:15:52.120 --rc geninfo_unexecuted_blocks=1 00:15:52.120 00:15:52.120 ' 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.120 --rc genhtml_branch_coverage=1 00:15:52.120 --rc genhtml_function_coverage=1 00:15:52.120 --rc genhtml_legend=1 00:15:52.120 --rc geninfo_all_blocks=1 00:15:52.120 --rc 
geninfo_unexecuted_blocks=1 00:15:52.120 00:15:52.120 ' 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:52.120 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.121 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:52.121 Cannot find device "nvmf_init_br" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:52.121 Cannot find device "nvmf_init_br2" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:52.121 Cannot find device "nvmf_tgt_br" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.121 Cannot find device "nvmf_tgt_br2" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:52.121 Cannot find device "nvmf_init_br" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:52.121 Cannot find device "nvmf_init_br2" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:52.121 Cannot find device "nvmf_tgt_br" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:52.121 Cannot find device "nvmf_tgt_br2" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:52.121 Cannot find device "nvmf_br" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:52.121 Cannot find 
device "nvmf_init_if" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:52.121 Cannot find device "nvmf_init_if2" 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.121 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:52.381 16:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:52.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:15:52.381 00:15:52.381 --- 10.0.0.3 ping statistics --- 00:15:52.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.381 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:52.381 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:52.381 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:52.381 00:15:52.381 --- 10.0.0.4 ping statistics --- 00:15:52.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.381 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:52.381 00:15:52.381 --- 10.0.0.1 ping statistics --- 00:15:52.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.381 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:52.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:52.381 00:15:52.381 --- 10.0.0.2 ping statistics --- 00:15:52.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.381 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=84955 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 84955 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 84955 ']' 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
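For readers following the trace, the nvmf_veth_init sequence above reduces to roughly the following. This is a minimal sketch reconstructed from the commands visible in the log (namespace, device names, addresses, the port-4420 iptables rules, and the nvmf_tgt invocation are copied from the trace); it is not the actual helper in test/nvmf/common.sh, which additionally handles pre-existing devices, rule tagging, and error paths.

```bash
#!/usr/bin/env bash
# Sketch of the veth/namespace topology built above (values taken from the trace).
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends move into the namespace; initiator ends stay in the root namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and join the peer ends to a single bridge.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Accept NVMe/TCP traffic to port 4420 and forwarding across the bridge
# (the harness also tags these rules with an SPDK_NVMF comment; see below).
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Start the SPDK target inside the namespace, as nvmfappstart does in the trace.
ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
```

With this topology in place, the pings from the root namespace to 10.0.0.3/10.0.0.4 and from the namespace back to 10.0.0.1/10.0.0.2, as logged above, confirm that both directions across the bridge work before the target is started.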
00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.381 16:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:52.641 [2024-11-26 16:21:18.059237] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:15:52.641 [2024-11-26 16:21:18.059559] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.641 [2024-11-26 16:21:18.212191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.641 [2024-11-26 16:21:18.235854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.641 [2024-11-26 16:21:18.235919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.641 [2024-11-26 16:21:18.235933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.641 [2024-11-26 16:21:18.235943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.641 [2024-11-26 16:21:18.235952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.641 [2024-11-26 16:21:18.236320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.641 [2024-11-26 16:21:18.271190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:52.900 [2024-11-26 16:21:18.386498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:52.900 Malloc0 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:52.900 [2024-11-26 16:21:18.420935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=84974 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=84975 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=84976 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:52.900 16:21:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 84974 00:15:53.160 [2024-11-26 16:21:18.609263] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:53.160 [2024-11-26 16:21:18.619455] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:53.160 [2024-11-26 16:21:18.620106] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:54.096 Initializing NVMe Controllers 00:15:54.096 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:54.096 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:54.096 Initialization complete. Launching workers. 00:15:54.096 ======================================================== 00:15:54.096 Latency(us) 00:15:54.096 Device Information : IOPS MiB/s Average min max 00:15:54.096 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3658.96 14.29 273.01 121.33 837.24 00:15:54.096 ======================================================== 00:15:54.096 Total : 3658.96 14.29 273.01 121.33 837.24 00:15:54.096 00:15:54.096 Initializing NVMe Controllers 00:15:54.096 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:54.096 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:54.096 Initialization complete. Launching workers. 00:15:54.096 ======================================================== 00:15:54.096 Latency(us) 00:15:54.096 Device Information : IOPS MiB/s Average min max 00:15:54.096 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3657.00 14.29 273.09 155.11 521.16 00:15:54.096 ======================================================== 00:15:54.096 Total : 3657.00 14.29 273.09 155.11 521.16 00:15:54.096 00:15:54.096 Initializing NVMe Controllers 00:15:54.096 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:54.096 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:54.096 Initialization complete. Launching workers. 
00:15:54.096 ======================================================== 00:15:54.096 Latency(us) 00:15:54.096 Device Information : IOPS MiB/s Average min max 00:15:54.096 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3647.00 14.25 273.80 170.52 812.32 00:15:54.096 ======================================================== 00:15:54.096 Total : 3647.00 14.25 273.80 170.52 812.32 00:15:54.096 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 84975 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 84976 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.096 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.096 rmmod nvme_tcp 00:15:54.096 rmmod nvme_fabrics 00:15:54.096 rmmod nvme_keyring 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 84955 ']' 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 84955 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 84955 ']' 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 84955 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84955 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84955' 00:15:54.356 killing process with pid 84955 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 84955 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 84955 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:54.356 16:21:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.356 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:54.615 00:15:54.615 real 0m2.838s 00:15:54.615 user 0m4.778s 00:15:54.615 
sys 0m1.294s 00:15:54.615 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.615 ************************************ 00:15:54.615 END TEST nvmf_control_msg_list 00:15:54.616 ************************************ 00:15:54.616 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:54.616 16:21:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:54.616 16:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.616 16:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.616 16:21:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.616 ************************************ 00:15:54.616 START TEST nvmf_wait_for_buf 00:15:54.616 ************************************ 00:15:54.616 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:54.876 * Looking for test storage... 00:15:54.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:54.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.876 --rc genhtml_branch_coverage=1 00:15:54.876 --rc genhtml_function_coverage=1 00:15:54.876 --rc genhtml_legend=1 00:15:54.876 --rc geninfo_all_blocks=1 00:15:54.876 --rc geninfo_unexecuted_blocks=1 00:15:54.876 00:15:54.876 ' 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:54.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.876 --rc genhtml_branch_coverage=1 00:15:54.876 --rc genhtml_function_coverage=1 00:15:54.876 --rc genhtml_legend=1 00:15:54.876 --rc geninfo_all_blocks=1 00:15:54.876 --rc geninfo_unexecuted_blocks=1 00:15:54.876 00:15:54.876 ' 00:15:54.876 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:54.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.876 --rc genhtml_branch_coverage=1 00:15:54.876 --rc genhtml_function_coverage=1 00:15:54.877 --rc genhtml_legend=1 00:15:54.877 --rc geninfo_all_blocks=1 00:15:54.877 --rc geninfo_unexecuted_blocks=1 00:15:54.877 00:15:54.877 ' 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:54.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.877 --rc genhtml_branch_coverage=1 00:15:54.877 --rc genhtml_function_coverage=1 00:15:54.877 --rc genhtml_legend=1 00:15:54.877 --rc geninfo_all_blocks=1 00:15:54.877 --rc geninfo_unexecuted_blocks=1 00:15:54.877 00:15:54.877 ' 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.877 16:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.877 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
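The ipts and iptr helpers that appear in both the teardown above and the setup below follow a simple tagging pattern: every firewall rule the harness inserts carries an SPDK_NVMF comment, so cleanup can later drop exactly those rules by filtering the saved ruleset. The sketch below mirrors the commands visible in the trace; the helper bodies are an assumption and the real definitions in nvmf/common.sh may differ.

```bash
# Sketch of the rule-tagging pattern seen in the trace (helper names from the log,
# bodies reconstructed; not the verbatim nvmf/common.sh implementation).
ipts() {
    # e.g. ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # adds the rule with a comment recording exactly what was inserted.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # Restore the current ruleset minus everything tagged by ipts above.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
```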
00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.877 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:54.878 Cannot find device "nvmf_init_br" 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:54.878 Cannot find device "nvmf_init_br2" 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:54.878 Cannot find device "nvmf_tgt_br" 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.878 Cannot find device "nvmf_tgt_br2" 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:54.878 Cannot find device "nvmf_init_br" 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:54.878 Cannot find device "nvmf_init_br2" 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:54.878 Cannot find device "nvmf_tgt_br" 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:54.878 Cannot find device "nvmf_tgt_br2" 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:54.878 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:55.137 Cannot find device "nvmf_br" 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:55.137 Cannot find device "nvmf_init_if" 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:55.137 Cannot find device "nvmf_init_if2" 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.137 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:55.137 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:55.138 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:55.138 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:55.138 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:15:55.138 00:15:55.138 --- 10.0.0.3 ping statistics --- 00:15:55.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.138 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:55.397 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:55.397 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:55.397 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:15:55.397 00:15:55.397 --- 10.0.0.4 ping statistics --- 00:15:55.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.397 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:55.397 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:55.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:55.397 00:15:55.397 --- 10.0.0.1 ping statistics --- 00:15:55.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.397 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:55.397 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:55.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:55.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:55.397 00:15:55.397 --- 10.0.0.2 ping statistics --- 00:15:55.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.397 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85209 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85209 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 85209 ']' 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.398 16:21:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.398 [2024-11-26 16:21:20.892074] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
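The nvmf_veth_init block above builds a small split topology: each of the four veth pairs has one end moved into the nvmf_tgt_ns_spdk namespace (target side, 10.0.0.3 and 10.0.0.4) while the peer ends stay in the root namespace and are enslaved to the nvmf_br bridge (initiator side, 10.0.0.1 and 10.0.0.2). A minimal sketch of the same layout, reduced to one initiator/target pair and run as root, looks like this (interface names and addresses follow the log; the iptables rules are covered separately below):

  # create the target namespace and two veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring the links up and bridge the two peer ends so the sides can reach each other
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # connectivity check, mirroring the pings in the log
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1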
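The ipts calls that follow the bridge setup are a tagging wrapper around iptables: every rule is installed with an SPDK_NVMF comment so that teardown can later remove exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore (visible near the end of each test). A sketch of the idea, inferred from the expanded commands in the log rather than copied from nvmf/common.sh:

  ipts() {
      # append a comment tag so cleanup can identify rules added by the harness
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }

  # allow NVMe/TCP traffic to port 4420 on the initiator interface,
  # and let the bridge forward between its ports
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT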
00:15:55.398 [2024-11-26 16:21:20.892166] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.398 [2024-11-26 16:21:21.043112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.657 [2024-11-26 16:21:21.066562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.657 [2024-11-26 16:21:21.066623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.657 [2024-11-26 16:21:21.066638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.657 [2024-11-26 16:21:21.066647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.657 [2024-11-26 16:21:21.066656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.657 [2024-11-26 16:21:21.067008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.657 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.657 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:55.657 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:55.657 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.657 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.657 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.657 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:55.657 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.658 16:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 [2024-11-26 16:21:21.236118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 Malloc0 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 [2024-11-26 16:21:21.283490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.658 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:55.918 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.918 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.918 [2024-11-26 16:21:21.311606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.918 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.918 16:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:55.918 [2024-11-26 16:21:21.508544] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:57.297 Initializing NVMe Controllers 00:15:57.297 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:57.297 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:57.297 Initialization complete. Launching workers. 00:15:57.297 ======================================================== 00:15:57.297 Latency(us) 00:15:57.297 Device Information : IOPS MiB/s Average min max 00:15:57.297 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 508.00 63.50 7929.93 5037.59 11137.42 00:15:57.297 ======================================================== 00:15:57.297 Total : 508.00 63.50 7929.93 5037.59 11137.42 00:15:57.297 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4826 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4826 -eq 0 ]] 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.297 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.297 rmmod nvme_tcp 00:15:57.297 rmmod nvme_fabrics 00:15:57.297 rmmod nvme_keyring 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85209 ']' 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85209 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85209 ']' 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 85209 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85209 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85209' 00:15:57.557 killing process with pid 85209 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85209 00:15:57.557 16:21:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85209 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:57.557 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:57.815 00:15:57.815 real 0m3.137s 00:15:57.815 user 0m2.531s 00:15:57.815 sys 0m0.740s 00:15:57.815 ************************************ 00:15:57.815 END TEST nvmf_wait_for_buf 00:15:57.815 ************************************ 00:15:57.815 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.816 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:57.816 16:21:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:15:57.816 16:21:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:57.816 16:21:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.816 16:21:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.816 16:21:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.816 ************************************ 00:15:57.816 START TEST nvmf_fuzz 00:15:57.816 ************************************ 00:15:57.816 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:58.076 * Looking for test storage... 
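Stepping back, the nvmf_wait_for_buf run above is configured entirely over JSON-RPC: the target starts with --wait-for-rpc, the iobuf small pool is deliberately shrunk to 154 buffers before framework init, a 32 MiB malloc bdev is exported over NVMe/TCP, spdk_nvme_perf drives 128 KiB random reads at it, and the test then reads the nvmf_TCP small-pool retry counter (4826 in this run) to confirm the undersized pool really was exhausted and retried. rpc_cmd is the harness's RPC helper; the same sequence can be issued by hand with scripts/rpc.py from an SPDK checkout, roughly as follows (paths are illustrative, not taken verbatim from the harness):

  RPC="scripts/rpc.py"            # adjust to the SPDK checkout being used

  # shrink the iobuf small pool before framework init so buffer allocation must retry under load
  $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
  $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  $RPC framework_start_init

  # export a 32 MiB malloc bdev over NVMe/TCP on the target-namespace address
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # drive I/O from the initiator side, then check how often the small pool had to retry allocations
  ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  retries=$($RPC iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  echo "small-pool retries: $retries"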
00:15:58.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:58.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.076 --rc genhtml_branch_coverage=1 00:15:58.076 --rc genhtml_function_coverage=1 00:15:58.076 --rc genhtml_legend=1 00:15:58.076 --rc geninfo_all_blocks=1 00:15:58.076 --rc geninfo_unexecuted_blocks=1 00:15:58.076 00:15:58.076 ' 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:58.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.076 --rc genhtml_branch_coverage=1 00:15:58.076 --rc genhtml_function_coverage=1 00:15:58.076 --rc genhtml_legend=1 00:15:58.076 --rc geninfo_all_blocks=1 00:15:58.076 --rc geninfo_unexecuted_blocks=1 00:15:58.076 00:15:58.076 ' 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:58.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.076 --rc genhtml_branch_coverage=1 00:15:58.076 --rc genhtml_function_coverage=1 00:15:58.076 --rc genhtml_legend=1 00:15:58.076 --rc geninfo_all_blocks=1 00:15:58.076 --rc geninfo_unexecuted_blocks=1 00:15:58.076 00:15:58.076 ' 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:58.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:58.076 --rc genhtml_branch_coverage=1 00:15:58.076 --rc genhtml_function_coverage=1 00:15:58.076 --rc genhtml_legend=1 00:15:58.076 --rc geninfo_all_blocks=1 00:15:58.076 --rc geninfo_unexecuted_blocks=1 00:15:58.076 00:15:58.076 ' 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.076 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:58.077 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
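Before touching the network, nvmftestinit arms the same safety net every target test uses: nvmftestfini is registered on SIGINT, SIGTERM and EXIT, so the namespace, veth pairs, bridge and the SPDK_NVMF-tagged iptables rules are removed even if the test aborts part-way. Reduced to its core, with an abbreviated stand-in for the real cleanup body:

  cleanup() {
      # drop only the rules the harness added (tagged SPDK_NVMF), keep everything else
      iptables-save | grep -v SPDK_NVMF | iptables-restore
      ip link delete nvmf_br 2>/dev/null || true
      ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
  }
  trap cleanup SIGINT SIGTERM EXIT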
00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:58.077 Cannot find device "nvmf_init_br" 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:58.077 16:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:58.077 Cannot find device "nvmf_init_br2" 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:58.077 Cannot find device "nvmf_tgt_br" 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.077 Cannot find device "nvmf_tgt_br2" 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:58.077 Cannot find device "nvmf_init_br" 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:58.077 Cannot find device "nvmf_init_br2" 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:15:58.077 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:58.336 Cannot find device "nvmf_tgt_br" 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:58.336 Cannot find device "nvmf_tgt_br2" 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:58.336 Cannot find device "nvmf_br" 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:58.336 Cannot find device "nvmf_init_if" 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:58.336 Cannot find device "nvmf_init_if2" 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.336 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.595 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.595 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:58.595 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:58.595 16:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:58.595 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.595 16:21:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:58.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:15:58.595 00:15:58.595 --- 10.0.0.3 ping statistics --- 00:15:58.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.595 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:58.595 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:58.595 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:15:58.595 00:15:58.595 --- 10.0.0.4 ping statistics --- 00:15:58.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.595 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:58.595 00:15:58.595 --- 10.0.0.1 ping statistics --- 00:15:58.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.595 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:58.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:58.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:15:58.595 00:15:58.595 --- 10.0.0.2 ping statistics --- 00:15:58.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.595 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=85469 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 85469 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 85469 ']' 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.595 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.596 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
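Here waitforlisten blocks until the nvmf_tgt just launched in the namespace (pid 85469) is answering on its UNIX-domain RPC socket, so the configuration RPCs that follow cannot race with application startup. A rough, hand-rolled equivalent is shown below; the polling method is an approximation, not the helper's actual implementation:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # poll the RPC socket until the target accepts requests, then continue with configuration
  for _ in $(seq 1 100); do
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done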
00:15:58.596 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.596 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.854 Malloc0 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:58.854 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.855 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:15:58.855 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:15:59.113 Shutting down the fuzz application 00:15:59.113 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:59.372 Shutting down the fuzz application 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:59.373 16:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:59.373 rmmod nvme_tcp 00:15:59.373 rmmod nvme_fabrics 00:15:59.373 rmmod nvme_keyring 00:15:59.373 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:59.373 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:15:59.373 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:15:59.373 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 85469 ']' 00:15:59.373 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 85469 00:15:59.373 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 85469 ']' 00:15:59.373 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 85469 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85469 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.632 killing process with pid 85469 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85469' 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 85469 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 85469 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:59.632 16:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:59.632 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:59.633 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:59.633 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:59.633 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:59.633 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:59.633 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:59.633 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:59.633 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:59.633 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:59.633 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:59.892 00:15:59.892 real 0m2.018s 00:15:59.892 user 0m1.657s 00:15:59.892 sys 0m0.630s 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:59.892 ************************************ 00:15:59.892 END TEST nvmf_fuzz 00:15:59.892 ************************************ 00:15:59.892 16:21:25 
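Note on the teardown above: nvmftestfini always walks the same sequence that the fuzz test just logged: unload the initiator kernel modules, stop the nvmf_tgt process, strip only the SPDK-tagged iptables rules, and dismantle the veth/bridge/namespace topology. The condensed sketch below is distilled from these log lines rather than copied from the scripts; the pid variable, the plain kill, and the final ip netns delete stand in for the suite's killprocess and remove_spdk_ns helpers, whose internals are not shown in this excerpt.

    # unload the initiator-side kernel modules loaded for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the target process (the harness does this via killprocess <pid>)
    kill "$nvmfpid"
    # rules were installed with "-m comment --comment SPDK_NVMF:...", so filtering
    # the saved ruleset on that tag and restoring it removes exactly the test rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # detach and drop the virtual links, then the bridge and the target namespace
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # approximates what remove_spdk_ns does

The multiconnection test that starts next rebuilds this same topology from scratch, which is why its first setup lines print the harmless "Cannot find device"/"Cannot open network namespace" messages while clearing any leftovers.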
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:59.892 ************************************ 00:15:59.892 START TEST nvmf_multiconnection 00:15:59.892 ************************************ 00:15:59.892 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:00.152 * Looking for test storage... 00:16:00.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:00.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.152 --rc genhtml_branch_coverage=1 00:16:00.152 --rc genhtml_function_coverage=1 00:16:00.152 --rc genhtml_legend=1 00:16:00.152 --rc geninfo_all_blocks=1 00:16:00.152 --rc geninfo_unexecuted_blocks=1 00:16:00.152 00:16:00.152 ' 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:00.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.152 --rc genhtml_branch_coverage=1 00:16:00.152 --rc genhtml_function_coverage=1 00:16:00.152 --rc genhtml_legend=1 00:16:00.152 --rc geninfo_all_blocks=1 00:16:00.152 --rc geninfo_unexecuted_blocks=1 00:16:00.152 00:16:00.152 ' 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:00.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.152 --rc genhtml_branch_coverage=1 00:16:00.152 --rc genhtml_function_coverage=1 00:16:00.152 --rc genhtml_legend=1 00:16:00.152 --rc geninfo_all_blocks=1 00:16:00.152 --rc geninfo_unexecuted_blocks=1 00:16:00.152 00:16:00.152 ' 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:00.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.152 --rc genhtml_branch_coverage=1 00:16:00.152 --rc genhtml_function_coverage=1 00:16:00.152 --rc genhtml_legend=1 00:16:00.152 --rc geninfo_all_blocks=1 00:16:00.152 --rc geninfo_unexecuted_blocks=1 00:16:00.152 00:16:00.152 ' 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.152 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.153 
16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:00.153 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:00.153 16:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:00.153 Cannot find device "nvmf_init_br" 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:00.153 Cannot find device "nvmf_init_br2" 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:00.153 Cannot find device "nvmf_tgt_br" 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.153 Cannot find device "nvmf_tgt_br2" 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:00.153 Cannot find device "nvmf_init_br" 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:00.153 Cannot find device "nvmf_init_br2" 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:16:00.153 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:00.153 Cannot find device "nvmf_tgt_br" 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:00.412 Cannot find device "nvmf_tgt_br2" 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:00.412 Cannot find device "nvmf_br" 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:00.412 Cannot find device "nvmf_init_if" 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:16:00.412 Cannot find device "nvmf_init_if2" 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:00.412 16:21:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:00.412 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:00.412 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:00.412 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:00.412 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:16:00.412 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:00.412 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:00.412 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:00.413 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:00.413 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:00.670 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:00.670 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:00.670 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:00.670 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:00.670 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:00.670 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:00.670 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:00.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:00.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:00.671 00:16:00.671 --- 10.0.0.3 ping statistics --- 00:16:00.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.671 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:00.671 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:00.671 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:16:00.671 00:16:00.671 --- 10.0.0.4 ping statistics --- 00:16:00.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.671 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:00.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:00.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:00.671 00:16:00.671 --- 10.0.0.1 ping statistics --- 00:16:00.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.671 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:00.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:00.671 00:16:00.671 --- 10.0.0.2 ping statistics --- 00:16:00.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.671 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=85702 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 85702 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 85702 ']' 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
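The setup phase logged above builds a small virtual network before launching the target: a namespace nvmf_tgt_ns_spdk holds the target-side veth ends (10.0.0.3 and 10.0.0.4), the host keeps the initiator ends (10.0.0.1 and 10.0.0.2), the host-side peers are enslaved to the nvmf_br bridge, iptables ACCEPT rules for port 4420 are tagged with an SPDK_NVMF comment so teardown can find them, and pings in both directions confirm the plumbing before nvmf_tgt starts inside the namespace. The sketch below is a rough, abbreviated reconstruction of that sequence using the same names; it shows only the first initiator/target pair (the second pair, nvmf_init_if2 at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.4, is created the same way) and the iptables comment text is shortened.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  up && ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # allow NVMe/TCP into the initiator interface, tagged for later cleanup
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow NVMe/TCP to the test target'
    ping -c 1 10.0.0.3   # host reaches the target address through the bridge
    # the target itself then runs inside the namespace, as logged below
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target under ip netns exec is what lets the host act as a genuine remote initiator over TCP while everything stays on one VM.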
00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.671 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:00.671 [2024-11-26 16:21:26.199905] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:16:00.671 [2024-11-26 16:21:26.200499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.929 [2024-11-26 16:21:26.353848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.929 [2024-11-26 16:21:26.380898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.929 [2024-11-26 16:21:26.381189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.929 [2024-11-26 16:21:26.381409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.929 [2024-11-26 16:21:26.381568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.929 [2024-11-26 16:21:26.381612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.929 [2024-11-26 16:21:26.382611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.929 [2024-11-26 16:21:26.382741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.929 [2024-11-26 16:21:26.383458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.929 [2024-11-26 16:21:26.383471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.929 [2024-11-26 16:21:26.419216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:00.929 [2024-11-26 16:21:26.515082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:16:00.929 16:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:00.929 Malloc1 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.929 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 [2024-11-26 16:21:26.581648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 Malloc2 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 Malloc3 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 Malloc4 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.188 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 Malloc5 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:01.189 
16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 Malloc6 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 Malloc7 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.189 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 Malloc8 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 
16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 Malloc9 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 Malloc10 00:16:01.449 16:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 Malloc11 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:16:01.449 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:16:01.449 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.449 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:01.449 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.449 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:16:01.450 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.450 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:01.708 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:16:01.708 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:01.708 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.708 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:01.708 16:21:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:03.643 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:03.643 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:03.643 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:16:03.643 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:03.643 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.643 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:03.643 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:03.643 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:03.937 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:03.937 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:03.937 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.937 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:03.937 16:21:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:05.845 16:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:05.845 16:21:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:08.378 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:08.378 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:08.378 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:16:08.378 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:08.378 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.378 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:08.378 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:08.379 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:08.379 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:08.379 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:08.379 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.379 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:16:08.379 16:21:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:10.282 16:21:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:12.188 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:12.188 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:12.188 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:16:12.188 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:12.188 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.188 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:12.189 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:12.189 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:12.448 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:12.448 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.448 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:12.448 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:12.448 16:21:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:14.349 16:21:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:14.349 16:21:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:14.349 16:21:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:16:14.349 16:21:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:14.349 16:21:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.349 16:21:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:14.349 16:21:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:14.349 16:21:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:14.606 16:21:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:14.606 16:21:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:14.606 16:21:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.606 16:21:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:14.606 16:21:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:16.504 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:16.504 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:16.504 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:16:16.504 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:16.504 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.504 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:16.504 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:16.504 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:16.762 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:16.762 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:16:16.762 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.762 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:16.762 16:21:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:18.665 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:18.665 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:18.665 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:16:18.665 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:18.665 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.665 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:18.665 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:18.665 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:18.925 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:18.925 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:18.925 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.925 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:18.925 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:20.829 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:20.829 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:20.829 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:16:21.088 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:21.088 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.088 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:21.088 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:21.088 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:21.088 16:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:21.088 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:21.088 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.088 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:21.088 16:21:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:22.991 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:22.991 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:22.991 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:16:23.249 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:23.249 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.249 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:23.250 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:23.250 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:23.250 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:23.250 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:23.250 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:23.250 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:23.250 16:21:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:25.182 16:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:25.182 16:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:25.182 16:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:16:25.442 16:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:25.442 16:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.442 16:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:25.442 16:21:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:25.442 [global] 00:16:25.442 thread=1 00:16:25.442 invalidate=1 00:16:25.442 rw=read 00:16:25.442 time_based=1 
00:16:25.442 runtime=10 00:16:25.442 ioengine=libaio 00:16:25.442 direct=1 00:16:25.442 bs=262144 00:16:25.442 iodepth=64 00:16:25.442 norandommap=1 00:16:25.442 numjobs=1 00:16:25.442 00:16:25.442 [job0] 00:16:25.442 filename=/dev/nvme0n1 00:16:25.442 [job1] 00:16:25.442 filename=/dev/nvme10n1 00:16:25.442 [job2] 00:16:25.442 filename=/dev/nvme1n1 00:16:25.442 [job3] 00:16:25.442 filename=/dev/nvme2n1 00:16:25.442 [job4] 00:16:25.442 filename=/dev/nvme3n1 00:16:25.442 [job5] 00:16:25.442 filename=/dev/nvme4n1 00:16:25.442 [job6] 00:16:25.442 filename=/dev/nvme5n1 00:16:25.442 [job7] 00:16:25.442 filename=/dev/nvme6n1 00:16:25.442 [job8] 00:16:25.442 filename=/dev/nvme7n1 00:16:25.442 [job9] 00:16:25.442 filename=/dev/nvme8n1 00:16:25.442 [job10] 00:16:25.442 filename=/dev/nvme9n1 00:16:25.442 Could not set queue depth (nvme0n1) 00:16:25.442 Could not set queue depth (nvme10n1) 00:16:25.442 Could not set queue depth (nvme1n1) 00:16:25.442 Could not set queue depth (nvme2n1) 00:16:25.442 Could not set queue depth (nvme3n1) 00:16:25.442 Could not set queue depth (nvme4n1) 00:16:25.442 Could not set queue depth (nvme5n1) 00:16:25.442 Could not set queue depth (nvme6n1) 00:16:25.442 Could not set queue depth (nvme7n1) 00:16:25.442 Could not set queue depth (nvme8n1) 00:16:25.442 Could not set queue depth (nvme9n1) 00:16:25.701 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:25.701 fio-3.35 00:16:25.701 Starting 11 threads 00:16:37.911 00:16:37.911 job0: (groupid=0, jobs=1): err= 0: pid=86149: Tue Nov 26 16:22:01 2024 00:16:37.911 read: IOPS=162, BW=40.6MiB/s (42.6MB/s)(412MiB/10129msec) 00:16:37.911 slat (usec): min=21, max=176534, avg=6072.41, stdev=15689.87 00:16:37.911 clat (msec): min=123, max=631, avg=386.87, stdev=68.37 00:16:37.911 lat (msec): min=139, max=632, avg=392.94, stdev=68.59 00:16:37.911 clat percentiles (msec): 00:16:37.911 | 1.00th=[ 182], 5.00th=[ 284], 10.00th=[ 313], 20.00th=[ 347], 00:16:37.911 | 30.00th=[ 363], 40.00th=[ 376], 50.00th=[ 384], 60.00th=[ 397], 00:16:37.911 | 70.00th=[ 409], 80.00th=[ 430], 90.00th=[ 464], 95.00th=[ 518], 00:16:37.911 | 99.00th=[ 600], 99.50th=[ 609], 99.90th=[ 617], 99.95th=[ 634], 00:16:37.911 | 99.99th=[ 634] 00:16:37.911 bw ( KiB/s): min=20992, max=48640, 
per=7.20%, avg=40550.25, stdev=5970.03, samples=20 00:16:37.911 iops : min= 82, max= 190, avg=158.25, stdev=23.26, samples=20 00:16:37.911 lat (msec) : 250=2.61%, 500=92.10%, 750=5.29% 00:16:37.911 cpu : usr=0.05%, sys=0.82%, ctx=336, majf=0, minf=4097 00:16:37.911 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:16:37.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.911 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.911 issued rwts: total=1646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.911 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.911 job1: (groupid=0, jobs=1): err= 0: pid=86150: Tue Nov 26 16:22:01 2024 00:16:37.911 read: IOPS=95, BW=23.8MiB/s (24.9MB/s)(242MiB/10171msec) 00:16:37.911 slat (usec): min=21, max=392020, avg=10360.20, stdev=33089.14 00:16:37.911 clat (msec): min=55, max=971, avg=661.36, stdev=132.67 00:16:37.911 lat (msec): min=55, max=1080, avg=671.72, stdev=133.19 00:16:37.911 clat percentiles (msec): 00:16:37.911 | 1.00th=[ 178], 5.00th=[ 472], 10.00th=[ 523], 20.00th=[ 567], 00:16:37.911 | 30.00th=[ 609], 40.00th=[ 642], 50.00th=[ 676], 60.00th=[ 709], 00:16:37.911 | 70.00th=[ 743], 80.00th=[ 768], 90.00th=[ 802], 95.00th=[ 835], 00:16:37.911 | 99.00th=[ 877], 99.50th=[ 877], 99.90th=[ 969], 99.95th=[ 969], 00:16:37.911 | 99.99th=[ 969] 00:16:37.911 bw ( KiB/s): min= 6656, max=34816, per=4.11%, avg=23146.25, stdev=7335.07, samples=20 00:16:37.911 iops : min= 26, max= 136, avg=90.20, stdev=28.73, samples=20 00:16:37.911 lat (msec) : 100=0.83%, 250=1.34%, 500=4.55%, 750=66.84%, 1000=26.45% 00:16:37.911 cpu : usr=0.07%, sys=0.45%, ctx=178, majf=0, minf=4098 00:16:37.911 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:16:37.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.911 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.911 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.911 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.911 job2: (groupid=0, jobs=1): err= 0: pid=86152: Tue Nov 26 16:22:01 2024 00:16:37.911 read: IOPS=93, BW=23.4MiB/s (24.6MB/s)(239MiB/10172msec) 00:16:37.911 slat (usec): min=23, max=352000, avg=10557.90, stdev=31861.42 00:16:37.911 clat (msec): min=75, max=962, avg=670.67, stdev=139.95 00:16:37.911 lat (msec): min=75, max=982, avg=681.23, stdev=139.68 00:16:37.911 clat percentiles (msec): 00:16:37.911 | 1.00th=[ 80], 5.00th=[ 456], 10.00th=[ 527], 20.00th=[ 592], 00:16:37.911 | 30.00th=[ 625], 40.00th=[ 659], 50.00th=[ 684], 60.00th=[ 718], 00:16:37.911 | 70.00th=[ 743], 80.00th=[ 785], 90.00th=[ 818], 95.00th=[ 844], 00:16:37.911 | 99.00th=[ 911], 99.50th=[ 961], 99.90th=[ 961], 99.95th=[ 961], 00:16:37.911 | 99.99th=[ 961] 00:16:37.911 bw ( KiB/s): min= 8721, max=32768, per=4.05%, avg=22786.05, stdev=7056.49, samples=20 00:16:37.911 iops : min= 34, max= 128, avg=88.80, stdev=27.49, samples=20 00:16:37.911 lat (msec) : 100=1.15%, 250=1.78%, 500=5.35%, 750=62.89%, 1000=28.83% 00:16:37.911 cpu : usr=0.02%, sys=0.47%, ctx=178, majf=0, minf=4097 00:16:37.911 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:16:37.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.911 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.911 issued rwts: total=954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.911 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:16:37.911 job3: (groupid=0, jobs=1): err= 0: pid=86153: Tue Nov 26 16:22:01 2024 00:16:37.911 read: IOPS=216, BW=54.1MiB/s (56.7MB/s)(548MiB/10129msec) 00:16:37.911 slat (usec): min=19, max=98029, avg=4525.79, stdev=11585.24 00:16:37.911 clat (msec): min=10, max=489, avg=290.83, stdev=119.11 00:16:37.911 lat (msec): min=10, max=489, avg=295.35, stdev=120.90 00:16:37.911 clat percentiles (msec): 00:16:37.911 | 1.00th=[ 15], 5.00th=[ 72], 10.00th=[ 100], 20.00th=[ 144], 00:16:37.911 | 30.00th=[ 257], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 363], 00:16:37.911 | 70.00th=[ 376], 80.00th=[ 384], 90.00th=[ 401], 95.00th=[ 409], 00:16:37.911 | 99.00th=[ 435], 99.50th=[ 451], 99.90th=[ 489], 99.95th=[ 489], 00:16:37.911 | 99.99th=[ 489] 00:16:37.911 bw ( KiB/s): min=39856, max=145920, per=9.68%, avg=54472.95, stdev=29299.17, samples=20 00:16:37.911 iops : min= 155, max= 570, avg=212.60, stdev=114.52, samples=20 00:16:37.911 lat (msec) : 20=2.05%, 50=0.68%, 100=7.44%, 250=19.45%, 500=70.37% 00:16:37.911 cpu : usr=0.19%, sys=0.98%, ctx=448, majf=0, minf=4097 00:16:37.911 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:16:37.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.911 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.911 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.911 job4: (groupid=0, jobs=1): err= 0: pid=86154: Tue Nov 26 16:22:01 2024 00:16:37.911 read: IOPS=93, BW=23.4MiB/s (24.6MB/s)(238MiB/10163msec) 00:16:37.911 slat (usec): min=20, max=348444, avg=10524.04, stdev=32159.81 00:16:37.911 clat (msec): min=138, max=964, avg=671.62, stdev=148.71 00:16:37.911 lat (msec): min=160, max=964, avg=682.14, stdev=148.61 00:16:37.911 clat percentiles (msec): 00:16:37.911 | 1.00th=[ 161], 5.00th=[ 380], 10.00th=[ 531], 20.00th=[ 575], 00:16:37.911 | 30.00th=[ 634], 40.00th=[ 659], 50.00th=[ 693], 60.00th=[ 709], 00:16:37.911 | 70.00th=[ 735], 80.00th=[ 776], 90.00th=[ 818], 95.00th=[ 902], 00:16:37.911 | 99.00th=[ 969], 99.50th=[ 969], 99.90th=[ 969], 99.95th=[ 969], 00:16:37.911 | 99.99th=[ 969] 00:16:37.911 bw ( KiB/s): min= 1536, max=32702, per=4.04%, avg=22740.65, stdev=8328.90, samples=20 00:16:37.911 iops : min= 6, max= 127, avg=88.65, stdev=32.42, samples=20 00:16:37.911 lat (msec) : 250=3.26%, 500=4.10%, 750=65.23%, 1000=27.42% 00:16:37.911 cpu : usr=0.05%, sys=0.46%, ctx=167, majf=0, minf=4097 00:16:37.911 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:16:37.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.911 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.911 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.911 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.911 job5: (groupid=0, jobs=1): err= 0: pid=86155: Tue Nov 26 16:22:01 2024 00:16:37.911 read: IOPS=733, BW=183MiB/s (192MB/s)(1842MiB/10051msec) 00:16:37.911 slat (usec): min=23, max=295626, avg=1273.39, stdev=4637.07 00:16:37.911 clat (usec): min=1116, max=815679, avg=85906.16, stdev=69925.14 00:16:37.911 lat (usec): min=1199, max=867073, avg=87179.55, stdev=70764.17 00:16:37.911 clat percentiles (usec): 00:16:37.912 | 1.00th=[ 1811], 5.00th=[ 43254], 10.00th=[ 74974], 20.00th=[ 78119], 00:16:37.912 | 30.00th=[ 80217], 40.00th=[ 81265], 50.00th=[ 82314], 
60.00th=[ 82314], 00:16:37.912 | 70.00th=[ 83362], 80.00th=[ 84411], 90.00th=[ 85459], 95.00th=[ 86508], 00:16:37.912 | 99.00th=[583009], 99.50th=[759170], 99.90th=[817890], 99.95th=[817890], 00:16:37.912 | 99.99th=[817890] 00:16:37.912 bw ( KiB/s): min=32256, max=291840, per=33.23%, avg=187065.20, stdev=51552.61, samples=20 00:16:37.912 iops : min= 126, max= 1140, avg=730.50, stdev=201.39, samples=20 00:16:37.912 lat (msec) : 2=1.59%, 4=0.81%, 10=0.77%, 20=0.53%, 50=3.05% 00:16:37.912 lat (msec) : 100=90.93%, 250=0.53%, 500=0.62%, 750=0.65%, 1000=0.50% 00:16:37.912 cpu : usr=0.27%, sys=4.40%, ctx=2209, majf=0, minf=4098 00:16:37.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:37.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.912 issued rwts: total=7369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.912 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.912 job6: (groupid=0, jobs=1): err= 0: pid=86156: Tue Nov 26 16:22:01 2024 00:16:37.912 read: IOPS=95, BW=23.8MiB/s (25.0MB/s)(242MiB/10167msec) 00:16:37.912 slat (usec): min=24, max=298218, avg=10361.39, stdev=30675.90 00:16:37.912 clat (msec): min=132, max=919, avg=660.78, stdev=138.33 00:16:37.912 lat (msec): min=213, max=919, avg=671.14, stdev=138.49 00:16:37.912 clat percentiles (msec): 00:16:37.912 | 1.00th=[ 228], 5.00th=[ 397], 10.00th=[ 485], 20.00th=[ 558], 00:16:37.912 | 30.00th=[ 584], 40.00th=[ 642], 50.00th=[ 684], 60.00th=[ 726], 00:16:37.912 | 70.00th=[ 760], 80.00th=[ 785], 90.00th=[ 818], 95.00th=[ 852], 00:16:37.912 | 99.00th=[ 911], 99.50th=[ 911], 99.90th=[ 919], 99.95th=[ 919], 00:16:37.912 | 99.99th=[ 919] 00:16:37.912 bw ( KiB/s): min=10730, max=32256, per=4.11%, avg=23146.55, stdev=6605.91, samples=20 00:16:37.912 iops : min= 41, max= 126, avg=90.20, stdev=25.91, samples=20 00:16:37.912 lat (msec) : 250=1.55%, 500=9.09%, 750=57.85%, 1000=31.51% 00:16:37.912 cpu : usr=0.07%, sys=0.48%, ctx=186, majf=0, minf=4097 00:16:37.912 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:16:37.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.912 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.912 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.912 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.912 job7: (groupid=0, jobs=1): err= 0: pid=86157: Tue Nov 26 16:22:01 2024 00:16:37.912 read: IOPS=216, BW=54.2MiB/s (56.8MB/s)(549MiB/10131msec) 00:16:37.912 slat (usec): min=19, max=90599, avg=4551.60, stdev=11844.89 00:16:37.912 clat (msec): min=14, max=475, avg=290.28, stdev=118.62 00:16:37.912 lat (msec): min=15, max=475, avg=294.83, stdev=120.40 00:16:37.912 clat percentiles (msec): 00:16:37.912 | 1.00th=[ 39], 5.00th=[ 66], 10.00th=[ 92], 20.00th=[ 144], 00:16:37.912 | 30.00th=[ 236], 40.00th=[ 330], 50.00th=[ 351], 60.00th=[ 363], 00:16:37.912 | 70.00th=[ 376], 80.00th=[ 384], 90.00th=[ 393], 95.00th=[ 409], 00:16:37.912 | 99.00th=[ 430], 99.50th=[ 443], 99.90th=[ 477], 99.95th=[ 477], 00:16:37.912 | 99.99th=[ 477] 00:16:37.912 bw ( KiB/s): min=38989, max=150016, per=9.70%, avg=54600.15, stdev=29035.17, samples=20 00:16:37.912 iops : min= 152, max= 586, avg=213.10, stdev=113.48, samples=20 00:16:37.912 lat (msec) : 20=0.46%, 50=1.59%, 100=9.38%, 250=19.13%, 500=69.44% 00:16:37.912 cpu : usr=0.13%, sys=0.99%, ctx=416, majf=0, 
minf=4097 00:16:37.912 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:16:37.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.912 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.912 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.912 job8: (groupid=0, jobs=1): err= 0: pid=86158: Tue Nov 26 16:22:01 2024 00:16:37.912 read: IOPS=166, BW=41.6MiB/s (43.6MB/s)(421MiB/10132msec) 00:16:37.912 slat (usec): min=22, max=262524, avg=5869.65, stdev=16055.42 00:16:37.912 clat (msec): min=51, max=618, avg=378.55, stdev=78.80 00:16:37.912 lat (msec): min=51, max=627, avg=384.42, stdev=79.60 00:16:37.912 clat percentiles (msec): 00:16:37.912 | 1.00th=[ 78], 5.00th=[ 259], 10.00th=[ 313], 20.00th=[ 347], 00:16:37.912 | 30.00th=[ 363], 40.00th=[ 376], 50.00th=[ 384], 60.00th=[ 397], 00:16:37.912 | 70.00th=[ 409], 80.00th=[ 426], 90.00th=[ 456], 95.00th=[ 498], 00:16:37.912 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 617], 99.95th=[ 617], 00:16:37.912 | 99.99th=[ 617] 00:16:37.912 bw ( KiB/s): min=35840, max=45056, per=7.37%, avg=41480.15, stdev=2728.81, samples=20 00:16:37.912 iops : min= 140, max= 176, avg=161.80, stdev=10.64, samples=20 00:16:37.912 lat (msec) : 100=1.54%, 250=3.33%, 500=90.26%, 750=4.87% 00:16:37.912 cpu : usr=0.06%, sys=0.83%, ctx=350, majf=0, minf=4097 00:16:37.912 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.3% 00:16:37.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.912 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.912 issued rwts: total=1684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.912 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.912 job9: (groupid=0, jobs=1): err= 0: pid=86159: Tue Nov 26 16:22:01 2024 00:16:37.912 read: IOPS=167, BW=41.8MiB/s (43.9MB/s)(424MiB/10132msec) 00:16:37.912 slat (usec): min=21, max=218866, avg=5898.60, stdev=14987.80 00:16:37.912 clat (msec): min=13, max=537, avg=375.85, stdev=68.32 00:16:37.912 lat (msec): min=13, max=660, avg=381.75, stdev=69.48 00:16:37.912 clat percentiles (msec): 00:16:37.912 | 1.00th=[ 114], 5.00th=[ 192], 10.00th=[ 338], 20.00th=[ 359], 00:16:37.912 | 30.00th=[ 368], 40.00th=[ 376], 50.00th=[ 380], 60.00th=[ 388], 00:16:37.912 | 70.00th=[ 397], 80.00th=[ 409], 90.00th=[ 435], 95.00th=[ 481], 00:16:37.912 | 99.00th=[ 518], 99.50th=[ 527], 99.90th=[ 531], 99.95th=[ 542], 00:16:37.912 | 99.99th=[ 542] 00:16:37.912 bw ( KiB/s): min=34746, max=46499, per=7.42%, avg=41783.20, stdev=3078.16, samples=20 00:16:37.912 iops : min= 135, max= 181, avg=163.00, stdev=12.03, samples=20 00:16:37.912 lat (msec) : 20=0.59%, 250=4.66%, 500=92.45%, 750=2.30% 00:16:37.912 cpu : usr=0.12%, sys=0.76%, ctx=350, majf=0, minf=4098 00:16:37.912 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:16:37.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.912 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.912 issued rwts: total=1696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.912 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.912 job10: (groupid=0, jobs=1): err= 0: pid=86160: Tue Nov 26 16:22:01 2024 00:16:37.912 read: IOPS=172, BW=43.1MiB/s (45.2MB/s)(437MiB/10120msec) 00:16:37.912 slat (usec): min=20, max=243438, avg=5421.99, 
stdev=15462.44 00:16:37.912 clat (msec): min=30, max=631, avg=365.16, stdev=81.36 00:16:37.912 lat (msec): min=30, max=704, avg=370.59, stdev=82.17 00:16:37.912 clat percentiles (msec): 00:16:37.912 | 1.00th=[ 64], 5.00th=[ 207], 10.00th=[ 279], 20.00th=[ 338], 00:16:37.912 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 372], 60.00th=[ 380], 00:16:37.912 | 70.00th=[ 393], 80.00th=[ 405], 90.00th=[ 451], 95.00th=[ 481], 00:16:37.912 | 99.00th=[ 592], 99.50th=[ 600], 99.90th=[ 634], 99.95th=[ 634], 00:16:37.912 | 99.99th=[ 634] 00:16:37.912 bw ( KiB/s): min=26624, max=62976, per=7.65%, avg=43054.95, stdev=6286.29, samples=20 00:16:37.912 iops : min= 104, max= 246, avg=168.15, stdev=24.56, samples=20 00:16:37.912 lat (msec) : 50=0.97%, 100=0.46%, 250=6.01%, 500=89.40%, 750=3.15% 00:16:37.912 cpu : usr=0.05%, sys=0.83%, ctx=352, majf=0, minf=4097 00:16:37.912 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:16:37.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.912 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:37.912 issued rwts: total=1746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.912 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:37.912 00:16:37.912 Run status group 0 (all jobs): 00:16:37.912 READ: bw=550MiB/s (576MB/s), 23.4MiB/s-183MiB/s (24.6MB/s-192MB/s), io=5592MiB (5864MB), run=10051-10172msec 00:16:37.912 00:16:37.912 Disk stats (read/write): 00:16:37.912 nvme0n1: ios=3168/0, merge=0/0, ticks=1223279/0, in_queue=1223279, util=97.65% 00:16:37.912 nvme10n1: ios=1809/0, merge=0/0, ticks=1210162/0, in_queue=1210162, util=98.01% 00:16:37.912 nvme1n1: ios=1780/0, merge=0/0, ticks=1209128/0, in_queue=1209128, util=98.10% 00:16:37.912 nvme2n1: ios=4261/0, merge=0/0, ticks=1224384/0, in_queue=1224384, util=98.22% 00:16:37.912 nvme3n1: ios=1783/0, merge=0/0, ticks=1200627/0, in_queue=1200627, util=98.23% 00:16:37.912 nvme4n1: ios=14633/0, merge=0/0, ticks=1228724/0, in_queue=1228724, util=98.47% 00:16:37.912 nvme5n1: ios=1809/0, merge=0/0, ticks=1193717/0, in_queue=1193717, util=98.59% 00:16:37.912 nvme6n1: ios=4272/0, merge=0/0, ticks=1225170/0, in_queue=1225170, util=98.74% 00:16:37.912 nvme7n1: ios=3245/0, merge=0/0, ticks=1222820/0, in_queue=1222820, util=98.90% 00:16:37.912 nvme8n1: ios=3265/0, merge=0/0, ticks=1220633/0, in_queue=1220633, util=99.09% 00:16:37.912 nvme9n1: ios=3365/0, merge=0/0, ticks=1224661/0, in_queue=1224661, util=99.08% 00:16:37.912 16:22:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:37.912 [global] 00:16:37.912 thread=1 00:16:37.912 invalidate=1 00:16:37.912 rw=randwrite 00:16:37.912 time_based=1 00:16:37.912 runtime=10 00:16:37.912 ioengine=libaio 00:16:37.912 direct=1 00:16:37.912 bs=262144 00:16:37.912 iodepth=64 00:16:37.912 norandommap=1 00:16:37.912 numjobs=1 00:16:37.912 00:16:37.912 [job0] 00:16:37.912 filename=/dev/nvme0n1 00:16:37.912 [job1] 00:16:37.912 filename=/dev/nvme10n1 00:16:37.912 [job2] 00:16:37.912 filename=/dev/nvme1n1 00:16:37.912 [job3] 00:16:37.912 filename=/dev/nvme2n1 00:16:37.912 [job4] 00:16:37.912 filename=/dev/nvme3n1 00:16:37.912 [job5] 00:16:37.912 filename=/dev/nvme4n1 00:16:37.912 [job6] 00:16:37.912 filename=/dev/nvme5n1 00:16:37.912 [job7] 00:16:37.912 filename=/dev/nvme6n1 00:16:37.912 [job8] 00:16:37.912 filename=/dev/nvme7n1 00:16:37.912 [job9] 00:16:37.912 filename=/dev/nvme8n1 
00:16:37.912 [job10] 00:16:37.912 filename=/dev/nvme9n1 00:16:37.913 Could not set queue depth (nvme0n1) 00:16:37.913 Could not set queue depth (nvme10n1) 00:16:37.913 Could not set queue depth (nvme1n1) 00:16:37.913 Could not set queue depth (nvme2n1) 00:16:37.913 Could not set queue depth (nvme3n1) 00:16:37.913 Could not set queue depth (nvme4n1) 00:16:37.913 Could not set queue depth (nvme5n1) 00:16:37.913 Could not set queue depth (nvme6n1) 00:16:37.913 Could not set queue depth (nvme7n1) 00:16:37.913 Could not set queue depth (nvme8n1) 00:16:37.913 Could not set queue depth (nvme9n1) 00:16:37.913 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:37.913 fio-3.35 00:16:37.913 Starting 11 threads 00:16:47.899 00:16:47.899 job0: (groupid=0, jobs=1): err= 0: pid=86361: Tue Nov 26 16:22:12 2024 00:16:47.899 write: IOPS=427, BW=107MiB/s (112MB/s)(1084MiB/10132msec); 0 zone resets 00:16:47.899 slat (usec): min=17, max=31516, avg=2240.87, stdev=4060.39 00:16:47.899 clat (msec): min=24, max=296, avg=147.32, stdev=35.65 00:16:47.899 lat (msec): min=24, max=296, avg=149.56, stdev=36.03 00:16:47.899 clat percentiles (msec): 00:16:47.899 | 1.00th=[ 37], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 115], 00:16:47.899 | 30.00th=[ 120], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:16:47.899 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 169], 95.00th=[ 171], 00:16:47.899 | 99.00th=[ 251], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 292], 00:16:47.899 | 99.99th=[ 296] 00:16:47.899 bw ( KiB/s): min=72336, max=174080, per=17.97%, avg=109335.15, stdev=24077.94, samples=20 00:16:47.899 iops : min= 282, max= 680, avg=427.05, stdev=94.11, samples=20 00:16:47.899 lat (msec) : 50=2.33%, 100=2.56%, 250=94.09%, 500=1.02% 00:16:47.899 cpu : usr=0.77%, sys=1.22%, ctx=5403, majf=0, minf=1 00:16:47.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:47.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.899 issued rwts: total=0,4334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.899 latency : target=0, window=0, percentile=100.00%, depth=64 
00:16:47.899 job1: (groupid=0, jobs=1): err= 0: pid=86362: Tue Nov 26 16:22:12 2024 00:16:47.899 write: IOPS=122, BW=30.6MiB/s (32.1MB/s)(316MiB/10334msec); 0 zone resets 00:16:47.899 slat (usec): min=21, max=84343, avg=7926.61, stdev=15129.32 00:16:47.899 clat (msec): min=10, max=879, avg=514.97, stdev=123.43 00:16:47.899 lat (msec): min=10, max=879, avg=522.90, stdev=124.61 00:16:47.899 clat percentiles (msec): 00:16:47.899 | 1.00th=[ 93], 5.00th=[ 266], 10.00th=[ 342], 20.00th=[ 447], 00:16:47.899 | 30.00th=[ 464], 40.00th=[ 527], 50.00th=[ 567], 60.00th=[ 575], 00:16:47.899 | 70.00th=[ 600], 80.00th=[ 609], 90.00th=[ 609], 95.00th=[ 617], 00:16:47.899 | 99.00th=[ 776], 99.50th=[ 810], 99.90th=[ 877], 99.95th=[ 877], 00:16:47.899 | 99.99th=[ 877] 00:16:47.899 bw ( KiB/s): min=26570, max=53248, per=5.05%, avg=30717.30, stdev=6409.64, samples=20 00:16:47.899 iops : min= 103, max= 208, avg=119.95, stdev=25.07, samples=20 00:16:47.899 lat (msec) : 20=0.32%, 50=0.32%, 100=0.63%, 250=2.53%, 500=33.78% 00:16:47.899 lat (msec) : 750=61.31%, 1000=1.11% 00:16:47.899 cpu : usr=0.33%, sys=0.31%, ctx=1160, majf=0, minf=1 00:16:47.899 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:16:47.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.899 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.899 issued rwts: total=0,1264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.899 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.899 job2: (groupid=0, jobs=1): err= 0: pid=86374: Tue Nov 26 16:22:12 2024 00:16:47.899 write: IOPS=216, BW=54.2MiB/s (56.9MB/s)(560MiB/10333msec); 0 zone resets 00:16:47.899 slat (usec): min=17, max=75904, avg=4305.34, stdev=10205.58 00:16:47.899 clat (msec): min=8, max=884, avg=290.49, stdev=208.99 00:16:47.899 lat (msec): min=8, max=884, avg=294.79, stdev=211.98 00:16:47.899 clat percentiles (msec): 00:16:47.899 | 1.00th=[ 41], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 63], 00:16:47.899 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 239], 00:16:47.899 | 70.00th=[ 527], 80.00th=[ 550], 90.00th=[ 558], 95.00th=[ 567], 00:16:47.899 | 99.00th=[ 676], 99.50th=[ 776], 99.90th=[ 852], 99.95th=[ 885], 00:16:47.899 | 99.99th=[ 885] 00:16:47.899 bw ( KiB/s): min=28672, max=261120, per=9.16%, avg=55731.20, stdev=55003.11, samples=20 00:16:47.899 iops : min= 112, max= 1020, avg=217.70, stdev=214.86, samples=20 00:16:47.899 lat (msec) : 10=0.36%, 20=0.22%, 50=3.08%, 100=19.14%, 250=38.02% 00:16:47.899 lat (msec) : 500=3.93%, 750=34.63%, 1000=0.62% 00:16:47.899 cpu : usr=0.22%, sys=0.79%, ctx=2830, majf=0, minf=1 00:16:47.899 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:16:47.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.899 issued rwts: total=0,2241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.899 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.899 job3: (groupid=0, jobs=1): err= 0: pid=86375: Tue Nov 26 16:22:12 2024 00:16:47.899 write: IOPS=122, BW=30.7MiB/s (32.2MB/s)(317MiB/10320msec); 0 zone resets 00:16:47.899 slat (usec): min=22, max=185068, avg=7915.61, stdev=15267.80 00:16:47.899 clat (msec): min=187, max=862, avg=513.45, stdev=87.37 00:16:47.899 lat (msec): min=188, max=862, avg=521.36, stdev=87.67 00:16:47.899 clat percentiles (msec): 00:16:47.899 | 1.00th=[ 236], 5.00th=[ 368], 10.00th=[ 409], 
20.00th=[ 447], 00:16:47.899 | 30.00th=[ 460], 40.00th=[ 523], 50.00th=[ 542], 60.00th=[ 558], 00:16:47.899 | 70.00th=[ 567], 80.00th=[ 567], 90.00th=[ 592], 95.00th=[ 617], 00:16:47.899 | 99.00th=[ 760], 99.50th=[ 793], 99.90th=[ 860], 99.95th=[ 860], 00:16:47.899 | 99.99th=[ 860] 00:16:47.899 bw ( KiB/s): min=24576, max=36864, per=5.06%, avg=30768.30, stdev=3556.45, samples=20 00:16:47.899 iops : min= 96, max= 144, avg=120.15, stdev=13.92, samples=20 00:16:47.899 lat (msec) : 250=1.42%, 500=33.73%, 750=63.74%, 1000=1.11% 00:16:47.899 cpu : usr=0.31%, sys=0.35%, ctx=1020, majf=0, minf=1 00:16:47.899 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:16:47.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.899 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.899 issued rwts: total=0,1266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.899 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.899 job4: (groupid=0, jobs=1): err= 0: pid=86376: Tue Nov 26 16:22:12 2024 00:16:47.899 write: IOPS=342, BW=85.6MiB/s (89.8MB/s)(868MiB/10136msec); 0 zone resets 00:16:47.899 slat (usec): min=17, max=56362, avg=2695.53, stdev=5622.87 00:16:47.899 clat (msec): min=15, max=539, avg=184.07, stdev=86.93 00:16:47.899 lat (msec): min=15, max=548, avg=186.76, stdev=88.09 00:16:47.899 clat percentiles (msec): 00:16:47.899 | 1.00th=[ 43], 5.00th=[ 85], 10.00th=[ 127], 20.00th=[ 157], 00:16:47.899 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 167], 00:16:47.899 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 351], 95.00th=[ 422], 00:16:47.899 | 99.00th=[ 456], 99.50th=[ 510], 99.90th=[ 535], 99.95th=[ 542], 00:16:47.899 | 99.99th=[ 542] 00:16:47.899 bw ( KiB/s): min=36864, max=124928, per=14.35%, avg=87260.75, stdev=25933.55, samples=20 00:16:47.899 iops : min= 144, max= 488, avg=340.85, stdev=101.30, samples=20 00:16:47.899 lat (msec) : 20=0.12%, 50=2.25%, 100=3.54%, 250=82.78%, 500=10.74% 00:16:47.899 lat (msec) : 750=0.58% 00:16:47.899 cpu : usr=0.63%, sys=1.03%, ctx=2962, majf=0, minf=1 00:16:47.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:47.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.899 issued rwts: total=0,3472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.899 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.899 job5: (groupid=0, jobs=1): err= 0: pid=86377: Tue Nov 26 16:22:12 2024 00:16:47.899 write: IOPS=126, BW=31.5MiB/s (33.1MB/s)(326MiB/10322msec); 0 zone resets 00:16:47.899 slat (usec): min=17, max=109367, avg=7620.13, stdev=14311.78 00:16:47.899 clat (msec): min=48, max=902, avg=499.43, stdev=105.61 00:16:47.899 lat (msec): min=48, max=902, avg=507.05, stdev=106.60 00:16:47.899 clat percentiles (msec): 00:16:47.899 | 1.00th=[ 108], 5.00th=[ 300], 10.00th=[ 368], 20.00th=[ 451], 00:16:47.899 | 30.00th=[ 468], 40.00th=[ 514], 50.00th=[ 531], 60.00th=[ 542], 00:16:47.899 | 70.00th=[ 558], 80.00th=[ 567], 90.00th=[ 567], 95.00th=[ 584], 00:16:47.899 | 99.00th=[ 785], 99.50th=[ 818], 99.90th=[ 902], 99.95th=[ 902], 00:16:47.899 | 99.99th=[ 902] 00:16:47.899 bw ( KiB/s): min=27136, max=47616, per=5.21%, avg=31714.85, stdev=4889.83, samples=20 00:16:47.899 iops : min= 106, max= 186, avg=123.85, stdev=19.07, samples=20 00:16:47.899 lat (msec) : 50=0.31%, 100=0.61%, 250=2.84%, 500=33.26%, 750=61.90% 00:16:47.899 lat 
(msec) : 1000=1.08% 00:16:47.899 cpu : usr=0.27%, sys=0.36%, ctx=1168, majf=0, minf=1 00:16:47.899 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.2% 00:16:47.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.899 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.899 issued rwts: total=0,1302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.899 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.899 job6: (groupid=0, jobs=1): err= 0: pid=86378: Tue Nov 26 16:22:12 2024 00:16:47.899 write: IOPS=179, BW=44.9MiB/s (47.1MB/s)(464MiB/10333msec); 0 zone resets 00:16:47.899 slat (usec): min=17, max=101723, avg=5336.93, stdev=11349.21 00:16:47.899 clat (msec): min=9, max=870, avg=350.65, stdev=185.57 00:16:47.899 lat (msec): min=9, max=870, avg=355.99, stdev=188.08 00:16:47.899 clat percentiles (msec): 00:16:47.899 | 1.00th=[ 52], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:16:47.899 | 30.00th=[ 180], 40.00th=[ 192], 50.00th=[ 257], 60.00th=[ 518], 00:16:47.899 | 70.00th=[ 542], 80.00th=[ 558], 90.00th=[ 567], 95.00th=[ 567], 00:16:47.899 | 99.00th=[ 693], 99.50th=[ 802], 99.90th=[ 869], 99.95th=[ 869], 00:16:47.899 | 99.99th=[ 869] 00:16:47.899 bw ( KiB/s): min=26624, max=94208, per=7.54%, avg=45878.85, stdev=26630.25, samples=20 00:16:47.899 iops : min= 104, max= 368, avg=179.15, stdev=104.03, samples=20 00:16:47.899 lat (msec) : 10=0.22%, 20=0.27%, 50=0.43%, 100=0.65%, 250=47.63% 00:16:47.899 lat (msec) : 500=8.03%, 750=42.03%, 1000=0.75% 00:16:47.899 cpu : usr=0.25%, sys=0.61%, ctx=2408, majf=0, minf=1 00:16:47.899 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:47.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.899 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.899 issued rwts: total=0,1856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.899 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.899 job7: (groupid=0, jobs=1): err= 0: pid=86379: Tue Nov 26 16:22:12 2024 00:16:47.899 write: IOPS=427, BW=107MiB/s (112MB/s)(1084MiB/10134msec); 0 zone resets 00:16:47.899 slat (usec): min=18, max=101838, avg=2268.63, stdev=4252.64 00:16:47.899 clat (msec): min=83, max=294, avg=147.25, stdev=26.35 00:16:47.899 lat (msec): min=89, max=294, avg=149.52, stdev=26.45 00:16:47.899 clat percentiles (msec): 00:16:47.899 | 1.00th=[ 96], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 116], 00:16:47.900 | 30.00th=[ 123], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:16:47.900 | 70.00th=[ 167], 80.00th=[ 167], 90.00th=[ 169], 95.00th=[ 171], 00:16:47.900 | 99.00th=[ 205], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 284], 00:16:47.900 | 99.99th=[ 296] 00:16:47.900 bw ( KiB/s): min=96063, max=151552, per=17.98%, avg=109388.75, stdev=19645.46, samples=20 00:16:47.900 iops : min= 375, max= 592, avg=427.25, stdev=76.78, samples=20 00:16:47.900 lat (msec) : 100=1.41%, 250=98.18%, 500=0.42% 00:16:47.900 cpu : usr=0.70%, sys=1.43%, ctx=5245, majf=0, minf=1 00:16:47.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:47.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.900 issued rwts: total=0,4336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.900 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.900 job8: (groupid=0, jobs=1): err= 0: pid=86380: 
Tue Nov 26 16:22:12 2024 00:16:47.900 write: IOPS=125, BW=31.4MiB/s (32.9MB/s)(324MiB/10318msec); 0 zone resets 00:16:47.900 slat (usec): min=21, max=108083, avg=7471.64, stdev=14315.70 00:16:47.900 clat (msec): min=48, max=900, avg=502.13, stdev=112.32 00:16:47.900 lat (msec): min=48, max=900, avg=509.60, stdev=113.50 00:16:47.900 clat percentiles (msec): 00:16:47.900 | 1.00th=[ 99], 5.00th=[ 279], 10.00th=[ 363], 20.00th=[ 435], 00:16:47.900 | 30.00th=[ 464], 40.00th=[ 518], 50.00th=[ 542], 60.00th=[ 558], 00:16:47.900 | 70.00th=[ 567], 80.00th=[ 575], 90.00th=[ 584], 95.00th=[ 617], 00:16:47.900 | 99.00th=[ 793], 99.50th=[ 827], 99.90th=[ 902], 99.95th=[ 902], 00:16:47.900 | 99.99th=[ 902] 00:16:47.900 bw ( KiB/s): min=25088, max=48031, per=5.18%, avg=31534.35, stdev=5497.57, samples=20 00:16:47.900 iops : min= 98, max= 187, avg=123.15, stdev=21.38, samples=20 00:16:47.900 lat (msec) : 50=0.23%, 100=0.77%, 250=2.70%, 500=33.67%, 750=61.24% 00:16:47.900 lat (msec) : 1000=1.39% 00:16:47.900 cpu : usr=0.23%, sys=0.46%, ctx=1571, majf=0, minf=1 00:16:47.900 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:16:47.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.900 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.900 issued rwts: total=0,1295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.900 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.900 job9: (groupid=0, jobs=1): err= 0: pid=86381: Tue Nov 26 16:22:12 2024 00:16:47.900 write: IOPS=124, BW=31.1MiB/s (32.6MB/s)(321MiB/10318msec); 0 zone resets 00:16:47.900 slat (usec): min=16, max=155595, avg=7677.95, stdev=14944.03 00:16:47.900 clat (msec): min=111, max=889, avg=507.07, stdev=102.28 00:16:47.900 lat (msec): min=111, max=889, avg=514.75, stdev=103.23 00:16:47.900 clat percentiles (msec): 00:16:47.900 | 1.00th=[ 167], 5.00th=[ 313], 10.00th=[ 372], 20.00th=[ 447], 00:16:47.900 | 30.00th=[ 477], 40.00th=[ 518], 50.00th=[ 535], 60.00th=[ 550], 00:16:47.900 | 70.00th=[ 558], 80.00th=[ 567], 90.00th=[ 575], 95.00th=[ 659], 00:16:47.900 | 99.00th=[ 785], 99.50th=[ 818], 99.90th=[ 885], 99.95th=[ 885], 00:16:47.900 | 99.99th=[ 885] 00:16:47.900 bw ( KiB/s): min=22528, max=41472, per=5.13%, avg=31203.50, stdev=4558.46, samples=20 00:16:47.900 iops : min= 88, max= 162, avg=121.85, stdev=17.83, samples=20 00:16:47.900 lat (msec) : 250=2.73%, 500=33.15%, 750=63.03%, 1000=1.09% 00:16:47.900 cpu : usr=0.22%, sys=0.41%, ctx=1315, majf=0, minf=1 00:16:47.900 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:16:47.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.900 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.900 issued rwts: total=0,1282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.900 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.900 job10: (groupid=0, jobs=1): err= 0: pid=86382: Tue Nov 26 16:22:12 2024 00:16:47.900 write: IOPS=184, BW=46.1MiB/s (48.4MB/s)(477MiB/10327msec); 0 zone resets 00:16:47.900 slat (usec): min=18, max=66653, avg=5133.37, stdev=10927.98 00:16:47.900 clat (msec): min=5, max=883, avg=341.40, stdev=190.16 00:16:47.900 lat (msec): min=6, max=883, avg=346.53, stdev=192.81 00:16:47.900 clat percentiles (msec): 00:16:47.900 | 1.00th=[ 27], 5.00th=[ 136], 10.00th=[ 165], 20.00th=[ 174], 00:16:47.900 | 30.00th=[ 178], 40.00th=[ 190], 50.00th=[ 228], 60.00th=[ 510], 00:16:47.900 | 70.00th=[ 535], 80.00th=[ 
558], 90.00th=[ 567], 95.00th=[ 567], 00:16:47.900 | 99.00th=[ 709], 99.50th=[ 810], 99.90th=[ 885], 99.95th=[ 885], 00:16:47.900 | 99.99th=[ 885] 00:16:47.900 bw ( KiB/s): min=28672, max=94208, per=7.75%, avg=47152.10, stdev=28103.25, samples=20 00:16:47.900 iops : min= 112, max= 368, avg=184.15, stdev=109.80, samples=20 00:16:47.900 lat (msec) : 10=0.21%, 20=0.58%, 50=0.89%, 100=1.73%, 250=49.16% 00:16:47.900 lat (msec) : 500=5.98%, 750=40.71%, 1000=0.73% 00:16:47.900 cpu : usr=0.32%, sys=0.58%, ctx=3046, majf=0, minf=1 00:16:47.900 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:47.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.900 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:47.900 issued rwts: total=0,1906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.900 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:47.900 00:16:47.900 Run status group 0 (all jobs): 00:16:47.900 WRITE: bw=594MiB/s (623MB/s), 30.6MiB/s-107MiB/s (32.1MB/s-112MB/s), io=6139MiB (6437MB), run=10132-10334msec 00:16:47.900 00:16:47.900 Disk stats (read/write): 00:16:47.900 nvme0n1: ios=49/8510, merge=0/0, ticks=59/1209399, in_queue=1209458, util=97.61% 00:16:47.900 nvme10n1: ios=49/2464, merge=0/0, ticks=36/1222266, in_queue=1222302, util=97.87% 00:16:47.900 nvme1n1: ios=43/4422, merge=0/0, ticks=41/1225417, in_queue=1225458, util=98.07% 00:16:47.900 nvme2n1: ios=27/2466, merge=0/0, ticks=59/1222105, in_queue=1222164, util=97.92% 00:16:47.900 nvme3n1: ios=21/6784, merge=0/0, ticks=41/1211461, in_queue=1211502, util=97.88% 00:16:47.900 nvme4n1: ios=0/2544, merge=0/0, ticks=0/1222889, in_queue=1222889, util=98.19% 00:16:47.900 nvme5n1: ios=0/3648, merge=0/0, ticks=0/1224270, in_queue=1224270, util=98.37% 00:16:47.900 nvme6n1: ios=0/8513, merge=0/0, ticks=0/1208478, in_queue=1208478, util=98.31% 00:16:47.900 nvme7n1: ios=0/2531, merge=0/0, ticks=0/1222803, in_queue=1222803, util=98.65% 00:16:47.900 nvme8n1: ios=0/2504, merge=0/0, ticks=0/1221986, in_queue=1221986, util=98.74% 00:16:47.900 nvme9n1: ios=0/3750, merge=0/0, ticks=0/1224629, in_queue=1224629, util=98.90% 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 
00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:47.900 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:47.900 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 
00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:47.900 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:47.901 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.901 16:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:47.901 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 
00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:47.901 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:47.901 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 
00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:47.901 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:47.901 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 
00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:47.901 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.901 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:48.161 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:48.161 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:48.161 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:48.161 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:48.161 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:16:48.161 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:16:48.161 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:16:48.161 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:48.162 rmmod nvme_tcp 00:16:48.162 rmmod nvme_fabrics 00:16:48.162 rmmod nvme_keyring 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 85702 ']' 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 85702 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 85702 ']' 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 85702 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85702 00:16:48.162 killing process with pid 85702 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85702' 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@973 -- # kill 85702 00:16:48.162 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 85702 00:16:48.421 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:48.421 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:48.421 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:48.421 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:48.421 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:16:48.421 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:48.421 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:16:48.421 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:48.421 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:48.421 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:48.421 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:48.421 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:48.422 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.422 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- 
# return 0 00:16:48.681 00:16:48.681 real 0m48.753s 00:16:48.681 user 2m48.976s 00:16:48.681 sys 0m23.904s 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.681 ************************************ 00:16:48.681 END TEST nvmf_multiconnection 00:16:48.681 ************************************ 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:48.681 ************************************ 00:16:48.681 START TEST nvmf_initiator_timeout 00:16:48.681 ************************************ 00:16:48.681 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:48.941 * Looking for test storage... 00:16:48.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 
-- # (( v = 0 )) 00:16:48.941 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:48.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.942 --rc genhtml_branch_coverage=1 00:16:48.942 --rc genhtml_function_coverage=1 00:16:48.942 --rc genhtml_legend=1 00:16:48.942 --rc geninfo_all_blocks=1 00:16:48.942 --rc geninfo_unexecuted_blocks=1 00:16:48.942 00:16:48.942 ' 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:48.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.942 --rc genhtml_branch_coverage=1 00:16:48.942 --rc genhtml_function_coverage=1 00:16:48.942 --rc genhtml_legend=1 00:16:48.942 --rc geninfo_all_blocks=1 00:16:48.942 --rc geninfo_unexecuted_blocks=1 00:16:48.942 00:16:48.942 ' 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:48.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.942 --rc genhtml_branch_coverage=1 00:16:48.942 --rc genhtml_function_coverage=1 00:16:48.942 --rc genhtml_legend=1 00:16:48.942 --rc geninfo_all_blocks=1 00:16:48.942 --rc geninfo_unexecuted_blocks=1 00:16:48.942 00:16:48.942 ' 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:48.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.942 --rc genhtml_branch_coverage=1 00:16:48.942 --rc genhtml_function_coverage=1 00:16:48.942 --rc genhtml_legend=1 00:16:48.942 --rc geninfo_all_blocks=1 00:16:48.942 --rc 
geninfo_unexecuted_blocks=1 00:16:48.942 00:16:48.942 ' 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.942 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.943 16:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:48.943 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:48.943 Cannot find device "nvmf_init_br" 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:48.943 Cannot find device "nvmf_init_br2" 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:48.943 Cannot find device "nvmf_tgt_br" 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.943 Cannot find device "nvmf_tgt_br2" 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:48.943 Cannot find device "nvmf_init_br" 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:48.943 Cannot find device "nvmf_init_br2" 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:48.943 Cannot find device "nvmf_tgt_br" 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:48.943 Cannot find device "nvmf_tgt_br2" 00:16:48.943 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:48.943 16:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:49.202 Cannot find device "nvmf_br" 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:49.202 Cannot find device "nvmf_init_if" 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:49.202 Cannot find device "nvmf_init_if2" 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:49.202 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:49.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:49.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:16:49.203 00:16:49.203 --- 10.0.0.3 ping statistics --- 00:16:49.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.203 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:49.203 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:49.203 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:16:49.203 00:16:49.203 --- 10.0.0.4 ping statistics --- 00:16:49.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.203 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:49.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:49.203 00:16:49.203 --- 10.0.0.1 ping statistics --- 00:16:49.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.203 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:49.203 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:49.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:49.462 00:16:49.462 --- 10.0.0.2 ping statistics --- 00:16:49.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.463 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=86803 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 86803 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 86803 ']' 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.463 16:22:14 
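The block above (nvmf_veth_init in nvmf/common.sh) builds the test network: namespace nvmf_tgt_ns_spdk holds the target-side interfaces nvmf_tgt_if (10.0.0.3/24) and nvmf_tgt_if2 (10.0.0.4/24), the host keeps the initiator-side interfaces nvmf_init_if (10.0.0.1/24) and nvmf_init_if2 (10.0.0.2/24), the peer ends of all four veth pairs are enslaved to bridge nvmf_br, and the pings confirm reachability in both directions. A condensed standalone sketch of the same steps, names and addresses taken from the trace (run as root; iproute2 and iptables assumed; the harness additionally tags each iptables rule with an SPDK_NVMF comment so it can strip them at teardown):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up, host side and namespace side
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # one bridge ties the four peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # allow NVMe/TCP traffic (port 4420) in and across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity checks, as in the trace
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1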
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.463 16:22:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.463 [2024-11-26 16:22:14.938397] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:16:49.463 [2024-11-26 16:22:14.938504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.463 [2024-11-26 16:22:15.086720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:49.463 [2024-11-26 16:22:15.106344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.463 [2024-11-26 16:22:15.106438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.463 [2024-11-26 16:22:15.106449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.463 [2024-11-26 16:22:15.106456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.463 [2024-11-26 16:22:15.106462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:49.463 [2024-11-26 16:22:15.107286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.463 [2024-11-26 16:22:15.107808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.463 [2024-11-26 16:22:15.108079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.463 [2024-11-26 16:22:15.108085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.723 [2024-11-26 16:22:15.137581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.723 Malloc0 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.723 Delay0 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.723 [2024-11-26 16:22:15.279552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:49.723 16:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:49.723 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.724 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.724 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.724 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:49.724 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.724 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.724 [2024-11-26 16:22:15.312380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:49.724 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.724 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:49.983 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:49.983 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:16:49.983 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.983 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:49.983 16:22:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:16:51.886 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:51.886 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:51.886 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.886 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:51.886 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.886 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:16:51.886 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=86860 00:16:51.886 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
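With nvmf_tgt running inside the namespace, the trace builds the target entirely over JSON-RPC: a malloc bdev (bdev_malloc_create 64 512), a delay bdev Delay0 layered on top of it, a TCP transport, and subsystem nqn.2016-06.io.spdk:cnode1 exposing Delay0 on 10.0.0.3:4420, which the host then attaches to with nvme-cli. Roughly the same sequence issued by hand with scripts/rpc.py against the default /var/tmp/spdk.sock (rpc_cmd in the trace is a wrapper around it; the hostnqn/hostid pair comes from nvme gen-hostnqn and differs per run):

    cd /home/vagrant/spdk_repo/spdk        # repo path as it appears in the trace
    # target side: RPC to the nvmf_tgt started under "ip netns exec nvmf_tgt_ns_spdk"
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # host side: connect and wait for the namespace to show up (serial SPDKISFASTANDAWESOME)
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a \
        --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # prints 1 once the controller is up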
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:51.886 16:22:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:51.886 [global] 00:16:51.886 thread=1 00:16:51.886 invalidate=1 00:16:51.886 rw=write 00:16:51.886 time_based=1 00:16:51.886 runtime=60 00:16:51.886 ioengine=libaio 00:16:51.886 direct=1 00:16:51.886 bs=4096 00:16:51.886 iodepth=1 00:16:51.886 norandommap=0 00:16:51.886 numjobs=1 00:16:51.886 00:16:51.886 verify_dump=1 00:16:51.886 verify_backlog=512 00:16:51.886 verify_state_save=0 00:16:51.886 do_verify=1 00:16:51.886 verify=crc32c-intel 00:16:51.887 [job0] 00:16:51.887 filename=/dev/nvme0n1 00:16:51.887 Could not set queue depth (nvme0n1) 00:16:52.145 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.145 fio-3.35 00:16:52.145 Starting 1 thread 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.552 true 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.552 true 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.552 true 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.552 true 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.552 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:58.086 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
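While the 60 s fio write job runs against /dev/nvme0n1, the test drives the delay bdev's simulated latencies: just above they are raised to 31000000 microseconds, roughly 31 s (p99_write to 310000000, as printed), which is long enough for in-flight writes to outlive the host's typical 30 s default I/O timeout, and a few lines further down they are dropped back to 30 microseconds so the job can finish and verify. The same knobs issued by hand, values copied from the trace; the reading of the test's intent is an interpretation, not something the log states:

    # raise Delay0's simulated latencies (values are microseconds) while I/O is in flight ...
    ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
    ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
    ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # ... then restore them to 30 us so the remainder of the fio run completes cleanly
    for lat in avg_read avg_write p99_read p99_write; do
        ./scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
    done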
common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:58.087 true 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:58.087 true 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:58.087 true 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:58.087 true 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:58.087 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 86860 00:17:54.312 00:17:54.312 job0: (groupid=0, jobs=1): err= 0: pid=86881: Tue Nov 26 16:23:17 2024 00:17:54.312 read: IOPS=863, BW=3456KiB/s (3538kB/s)(202MiB/60000msec) 00:17:54.312 slat (nsec): min=9914, max=66829, avg=12235.84, stdev=3362.23 00:17:54.312 clat (usec): min=148, max=741, avg=193.00, stdev=18.88 00:17:54.312 lat (usec): min=163, max=756, avg=205.23, stdev=19.55 00:17:54.312 clat percentiles (usec): 00:17:54.312 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:17:54.312 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:17:54.312 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 229], 00:17:54.312 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 277], 99.95th=[ 289], 00:17:54.312 | 99.99th=[ 457] 00:17:54.312 write: IOPS=870, BW=3482KiB/s (3565kB/s)(204MiB/60000msec); 0 zone resets 00:17:54.312 slat (usec): min=12, max=14847, avg=19.18, stdev=73.69 00:17:54.312 clat (usec): min=5, max=40549k, avg=923.29, stdev=177436.69 00:17:54.312 lat (usec): min=129, max=40549k, avg=942.47, stdev=177436.71 00:17:54.312 clat percentiles (usec): 00:17:54.312 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 131], 00:17:54.312 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:17:54.312 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 180], 00:17:54.312 | 99.00th=[ 
200], 99.50th=[ 208], 99.90th=[ 310], 99.95th=[ 545], 00:17:54.312 | 99.99th=[ 2638] 00:17:54.312 bw ( KiB/s): min= 3160, max=12288, per=100.00%, avg=10443.28, stdev=1876.20, samples=39 00:17:54.312 iops : min= 790, max= 3072, avg=2610.82, stdev=469.05, samples=39 00:17:54.312 lat (usec) : 10=0.01%, 250=99.55%, 500=0.42%, 750=0.02%, 1000=0.01% 00:17:54.312 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:17:54.312 cpu : usr=0.53%, sys=2.12%, ctx=104075, majf=0, minf=5 00:17:54.312 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.312 issued rwts: total=51833,52224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.312 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.312 00:17:54.312 Run status group 0 (all jobs): 00:17:54.312 READ: bw=3456KiB/s (3538kB/s), 3456KiB/s-3456KiB/s (3538kB/s-3538kB/s), io=202MiB (212MB), run=60000-60000msec 00:17:54.312 WRITE: bw=3482KiB/s (3565kB/s), 3482KiB/s-3482KiB/s (3565kB/s-3565kB/s), io=204MiB (214MB), run=60000-60000msec 00:17:54.312 00:17:54.312 Disk stats (read/write): 00:17:54.312 nvme0n1: ios=51987/51775, merge=0/0, ticks=10325/7970, in_queue=18295, util=99.61% 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:54.312 nvmf hotplug test: fio successful as expected 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
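Once fio exits with status 0 ("nvmf hotplug test: fio successful as expected"), the trace tears the host side down: the controller is disconnected, the subsystem is deleted over RPC, the fio verify state file is removed, and the nvme-tcp and nvme-fabrics modules are unloaded. The same steps by hand, command names and arguments copied from the trace:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state
    modprobe -v -r nvme-tcp          # the trace unloads nvme-fabrics right after
    modprobe -v -r nvme-fabrics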
00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.312 rmmod nvme_tcp 00:17:54.312 rmmod nvme_fabrics 00:17:54.312 rmmod nvme_keyring 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 86803 ']' 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 86803 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 86803 ']' 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 86803 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86803 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.312 killing process with pid 86803 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86803' 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 86803 00:17:54.312 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 86803 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:17:54.312 16:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:54.312 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:54.313 00:17:54.313 real 1m4.054s 00:17:54.313 user 3m50.833s 00:17:54.313 sys 0m21.277s 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:54.313 ************************************ 00:17:54.313 END TEST nvmf_initiator_timeout 00:17:54.313 ************************************ 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.313 ************************************ 00:17:54.313 START TEST nvmf_nsid 00:17:54.313 ************************************ 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:54.313 * Looking for test storage... 00:17:54.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:54.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.313 --rc genhtml_branch_coverage=1 00:17:54.313 --rc genhtml_function_coverage=1 00:17:54.313 --rc genhtml_legend=1 00:17:54.313 --rc geninfo_all_blocks=1 00:17:54.313 --rc geninfo_unexecuted_blocks=1 00:17:54.313 00:17:54.313 ' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:54.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.313 --rc genhtml_branch_coverage=1 00:17:54.313 --rc genhtml_function_coverage=1 00:17:54.313 --rc genhtml_legend=1 00:17:54.313 --rc geninfo_all_blocks=1 00:17:54.313 --rc geninfo_unexecuted_blocks=1 00:17:54.313 00:17:54.313 ' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:54.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.313 --rc genhtml_branch_coverage=1 00:17:54.313 --rc genhtml_function_coverage=1 00:17:54.313 --rc genhtml_legend=1 00:17:54.313 --rc geninfo_all_blocks=1 00:17:54.313 --rc geninfo_unexecuted_blocks=1 00:17:54.313 00:17:54.313 ' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:54.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.313 --rc genhtml_branch_coverage=1 00:17:54.313 --rc genhtml_function_coverage=1 00:17:54.313 --rc genhtml_legend=1 00:17:54.313 --rc geninfo_all_blocks=1 00:17:54.313 --rc geninfo_unexecuted_blocks=1 00:17:54.313 00:17:54.313 ' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.313 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:54.314 Cannot find device "nvmf_init_br" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:54.314 Cannot find device "nvmf_init_br2" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:54.314 Cannot find device "nvmf_tgt_br" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.314 Cannot find device "nvmf_tgt_br2" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:54.314 Cannot find device "nvmf_init_br" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:54.314 Cannot find device "nvmf_init_br2" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:54.314 Cannot find device "nvmf_tgt_br" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:54.314 Cannot find device "nvmf_tgt_br2" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:54.314 Cannot find device "nvmf_br" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:54.314 Cannot find device "nvmf_init_if" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:54.314 Cannot find device "nvmf_init_if2" 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:54.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:54.314 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
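The nsid test rebuilds the same namespace, veth, and bridge layout, and the firewall rules added on the next lines use the harness's tag-and-strip pattern: the ipts wrapper appends an SPDK_NVMF comment to every rule it inserts, so the iptr teardown step (visible at the end of the previous test) can restore the ruleset by filtering the tagged rules out of an iptables-save dump. A minimal sketch of that pattern, with the rule text taken from the trace:

    # insert a rule tagged with an SPDK_NVMF comment, as the ipts wrapper does
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # ... later, drop every harness-added rule in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore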
00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:54.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:54.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:54.315 00:17:54.315 --- 10.0.0.3 ping statistics --- 00:17:54.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.315 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:54.315 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:54.315 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:17:54.315 00:17:54.315 --- 10.0.0.4 ping statistics --- 00:17:54.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.315 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:54.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:54.315 00:17:54.315 --- 10.0.0.1 ping statistics --- 00:17:54.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.315 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:54.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:54.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:54.315 00:17:54.315 --- 10.0.0.2 ping statistics --- 00:17:54.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.315 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=87760 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 87760 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 87760 ']' 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.315 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:54.315 [2024-11-26 16:23:19.054778] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:17:54.315 [2024-11-26 16:23:19.054893] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.315 [2024-11-26 16:23:19.208134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.315 [2024-11-26 16:23:19.230652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.315 [2024-11-26 16:23:19.230727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.315 [2024-11-26 16:23:19.230747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.315 [2024-11-26 16:23:19.230757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.315 [2024-11-26 16:23:19.230766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.315 [2024-11-26 16:23:19.231130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.315 [2024-11-26 16:23:19.264468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=87792 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=69bebc73-cb14-4787-9591-8daeac3dd5d1 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0d3f91bd-fd47-4200-8e98-a7000abd8165 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4a4dde5b-d0a3-46c0-b7c1-bc62d7fb0f26 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:54.578 null0 00:17:54.578 null1 00:17:54.578 null2 00:17:54.578 [2024-11-26 16:23:20.138559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.578 [2024-11-26 16:23:20.157518] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:17:54.578 [2024-11-26 16:23:20.157624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87792 ] 00:17:54.578 [2024-11-26 16:23:20.162701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 87792 /var/tmp/tgt2.sock 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 87792 ']' 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
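[editor note] The rpc_cmd batch at target/nsid.sh@63 is what produces the null0/null1/null2 bdevs, the namespaces bound to the three uuidgen values, and the 10.0.0.3:4420 listener seen in the trace. A hedged reconstruction of the equivalent rpc.py calls follows; only the bdev names, the listener address/port, the '-t tcp -o' transport options and the $nsXuuid variables come from the trace, while the subsystem NQN and bdev sizes are placeholders for illustration:
  rpc.py nvmf_create_transport $NVMF_TRANSPORT_OPTS           # '-t tcp -o' per the trace
  rpc.py bdev_null_create null0 64 512                        # sizes assumed for illustration
  rpc.py bdev_null_create null1 64 512
  rpc.py bdev_null_create null2 64 512
  rpc.py nvmf_create_subsystem nqn.2024-10.io.spdk:cnode1 -a  # NQN is a placeholder
  rpc.py nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode1 null0 -u "$ns1uuid"
  rpc.py nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode1 null1 -u "$ns2uuid"
  rpc.py nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode1 null2 -u "$ns3uuid"
  rpc.py nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420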
00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.578 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:54.837 [2024-11-26 16:23:20.309611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.837 [2024-11-26 16:23:20.334503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.837 [2024-11-26 16:23:20.377919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:55.095 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.096 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:55.096 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:55.354 [2024-11-26 16:23:20.934297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.354 [2024-11-26 16:23:20.950401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:55.354 nvme0n1 nvme0n2 00:17:55.354 nvme1n1 00:17:55.613 16:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:55.613 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:56.549 16:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 69bebc73-cb14-4787-9591-8daeac3dd5d1 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:56.549 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=69bebc73cb14478795918daeac3dd5d1 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 69BEBC73CB14478795918DAEAC3DD5D1 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 69BEBC73CB14478795918DAEAC3DD5D1 == \6\9\B\E\B\C\7\3\C\B\1\4\4\7\8\7\9\5\9\1\8\D\A\E\A\C\3\D\D\5\D\1 ]] 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0d3f91bd-fd47-4200-8e98-a7000abd8165 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0d3f91bdfd4742008e98a7000abd8165 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0D3F91BDFD4742008E98A7000ABD8165 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0D3F91BDFD4742008E98A7000ABD8165 == \0\D\3\F\9\1\B\D\F\D\4\7\4\2\0\0\8\E\9\8\A\7\0\0\0\A\B\D\8\1\6\5 ]] 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:56.808 16:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4a4dde5b-d0a3-46c0-b7c1-bc62d7fb0f26 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4a4dde5bd0a346c0b7c1bc62d7fb0f26 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4A4DDE5BD0A346C0B7C1BC62D7FB0F26 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4A4DDE5BD0A346C0B7C1BC62D7FB0F26 == \4\A\4\D\D\E\5\B\D\0\A\3\4\6\C\0\B\7\C\1\B\C\6\2\D\7\F\B\0\F\2\6 ]] 00:17:56.808 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 87792 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 87792 ']' 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 87792 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87792 00:17:57.067 killing process with pid 87792 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87792' 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 87792 00:17:57.067 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 87792 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.327 rmmod nvme_tcp 00:17:57.327 rmmod nvme_fabrics 00:17:57.327 rmmod nvme_keyring 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 87760 ']' 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 87760 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 87760 ']' 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 87760 00:17:57.327 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:57.587 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.587 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87760 00:17:57.587 killing process with pid 87760 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87760' 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 87760 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 87760 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:57.587 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:57.889 ************************************ 00:17:57.889 END TEST nvmf_nsid 00:17:57.889 ************************************ 00:17:57.889 00:17:57.889 real 0m4.949s 00:17:57.889 user 0m7.323s 00:17:57.889 sys 0m1.540s 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:57.889 ************************************ 00:17:57.889 END TEST nvmf_target_extra 00:17:57.889 ************************************ 00:17:57.889 00:17:57.889 real 6m53.951s 00:17:57.889 user 17m11.362s 00:17:57.889 sys 1m50.709s 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.889 16:23:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.889 16:23:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:57.889 16:23:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:57.889 16:23:23 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.889 16:23:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.889 ************************************ 00:17:57.889 START TEST nvmf_host 00:17:57.889 ************************************ 00:17:57.889 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:58.174 * Looking for test storage... 
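[editor note] The nsid checks at target/nsid.sh@96-@100 above boil down to: strip the hyphens from the UUID each namespace was created with, read the NGUID back from the attached block device with nvme id-ns, and compare the two case-insensitively. A standalone sketch of that check, assuming nvme-cli and jq exactly as used in the trace:
  uuid=69bebc73-cb14-4787-9591-8daeac3dd5d1                                   # ns1uuid from the run above
  expected=$(echo "$uuid" | tr -d - | tr '[:lower:]' '[:upper:]')             # uuid2nguid: drop hyphens
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
  [ "$expected" = "$actual" ] && echo "nguid matches ns1uuid" || echo "nguid mismatch" >&2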
00:17:58.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:58.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.174 --rc genhtml_branch_coverage=1 00:17:58.174 --rc genhtml_function_coverage=1 00:17:58.174 --rc genhtml_legend=1 00:17:58.174 --rc geninfo_all_blocks=1 00:17:58.174 --rc geninfo_unexecuted_blocks=1 00:17:58.174 00:17:58.174 ' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:58.174 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:58.174 --rc genhtml_branch_coverage=1 00:17:58.174 --rc genhtml_function_coverage=1 00:17:58.174 --rc genhtml_legend=1 00:17:58.174 --rc geninfo_all_blocks=1 00:17:58.174 --rc geninfo_unexecuted_blocks=1 00:17:58.174 00:17:58.174 ' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:58.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.174 --rc genhtml_branch_coverage=1 00:17:58.174 --rc genhtml_function_coverage=1 00:17:58.174 --rc genhtml_legend=1 00:17:58.174 --rc geninfo_all_blocks=1 00:17:58.174 --rc geninfo_unexecuted_blocks=1 00:17:58.174 00:17:58.174 ' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:58.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.174 --rc genhtml_branch_coverage=1 00:17:58.174 --rc genhtml_function_coverage=1 00:17:58.174 --rc genhtml_legend=1 00:17:58.174 --rc geninfo_all_blocks=1 00:17:58.174 --rc geninfo_unexecuted_blocks=1 00:17:58.174 00:17:58.174 ' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.174 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:58.174 
16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.174 ************************************ 00:17:58.174 START TEST nvmf_identify 00:17:58.174 ************************************ 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:58.174 * Looking for test storage... 00:17:58.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:58.174 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:58.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.434 --rc genhtml_branch_coverage=1 00:17:58.434 --rc genhtml_function_coverage=1 00:17:58.434 --rc genhtml_legend=1 00:17:58.434 --rc geninfo_all_blocks=1 00:17:58.434 --rc geninfo_unexecuted_blocks=1 00:17:58.434 00:17:58.434 ' 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:58.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.434 --rc genhtml_branch_coverage=1 00:17:58.434 --rc genhtml_function_coverage=1 00:17:58.434 --rc genhtml_legend=1 00:17:58.434 --rc geninfo_all_blocks=1 00:17:58.434 --rc geninfo_unexecuted_blocks=1 00:17:58.434 00:17:58.434 ' 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:58.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.434 --rc genhtml_branch_coverage=1 00:17:58.434 --rc genhtml_function_coverage=1 00:17:58.434 --rc genhtml_legend=1 00:17:58.434 --rc geninfo_all_blocks=1 00:17:58.434 --rc geninfo_unexecuted_blocks=1 00:17:58.434 00:17:58.434 ' 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:58.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.434 --rc genhtml_branch_coverage=1 00:17:58.434 --rc genhtml_function_coverage=1 00:17:58.434 --rc genhtml_legend=1 00:17:58.434 --rc geninfo_all_blocks=1 00:17:58.434 --rc geninfo_unexecuted_blocks=1 00:17:58.434 00:17:58.434 ' 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.434 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.435 
16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.435 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.435 16:23:23 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:58.435 Cannot find device "nvmf_init_br" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:58.435 Cannot find device "nvmf_init_br2" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:58.435 Cannot find device "nvmf_tgt_br" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:58.435 Cannot find device "nvmf_tgt_br2" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:58.435 Cannot find device "nvmf_init_br" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:58.435 Cannot find device "nvmf_init_br2" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:58.435 Cannot find device "nvmf_tgt_br" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:58.435 Cannot find device "nvmf_tgt_br2" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:58.435 Cannot find device "nvmf_br" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:58.435 Cannot find device "nvmf_init_if" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:58.435 Cannot find device "nvmf_init_if2" 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:58.435 16:23:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.435 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:58.435 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.435 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:58.435 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.435 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.435 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:58.435 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.436 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.436 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.695 
16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:58.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:58.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:17:58.695 00:17:58.695 --- 10.0.0.3 ping statistics --- 00:17:58.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.695 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:58.695 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:58.695 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:17:58.695 00:17:58.695 --- 10.0.0.4 ping statistics --- 00:17:58.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.695 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:58.695 00:17:58.695 --- 10.0.0.1 ping statistics --- 00:17:58.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.695 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:58.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:58.695 00:17:58.695 --- 10.0.0.2 ping statistics --- 00:17:58.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.695 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88147 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88147 00:17:58.695 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 88147 ']' 00:17:58.696 
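Condensed for reference, the nvmf/common.sh network plumbing traced above amounts to the following shell sequence (a hand-written summary of the commands already shown, not additional captured output; the second veth pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, follows the same pattern and is omitted):

# target-side interface lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# initiator side gets 10.0.0.1, target side gets 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bridge the *_br peers together, open the NVMe/TCP port, bring links up
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# verify reachability in both directions
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1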
16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.696 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.696 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.696 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.696 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:58.955 [2024-11-26 16:23:24.360204] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:17:58.955 [2024-11-26 16:23:24.360313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.955 [2024-11-26 16:23:24.513778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.955 [2024-11-26 16:23:24.539271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.955 [2024-11-26 16:23:24.539351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.955 [2024-11-26 16:23:24.539366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.955 [2024-11-26 16:23:24.539376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.955 [2024-11-26 16:23:24.539386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
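The target itself is launched inside that namespace. Stripped of the harness wrappers, the invocation and the trace-capture hints from the NOTICE lines above come down to (paths as used in this workspace; spdk_trace is the utility built under build/bin of the same checkout):

# 4 cores (-m 0xF), all tracepoint groups enabled (-e 0xFFFF), shm id 0
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# while the target runs, snapshot the nvmf tracepoints...
spdk_trace -s nvmf -i 0
# ...or keep the raw trace ring for offline analysis
cp /dev/shm/nvmf_trace.0 .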
00:17:58.955 [2024-11-26 16:23:24.540243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.955 [2024-11-26 16:23:24.540403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.955 [2024-11-26 16:23:24.540491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.955 [2024-11-26 16:23:24.540491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.955 [2024-11-26 16:23:24.574090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.214 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.214 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:59.214 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.214 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.214 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:59.214 [2024-11-26 16:23:24.648862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.214 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:59.215 Malloc0 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:59.215 [2024-11-26 16:23:24.741833] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:59.215 [ 00:17:59.215 { 00:17:59.215 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:59.215 "subtype": "Discovery", 00:17:59.215 "listen_addresses": [ 00:17:59.215 { 00:17:59.215 "trtype": "TCP", 00:17:59.215 "adrfam": "IPv4", 00:17:59.215 "traddr": "10.0.0.3", 00:17:59.215 "trsvcid": "4420" 00:17:59.215 } 00:17:59.215 ], 00:17:59.215 "allow_any_host": true, 00:17:59.215 "hosts": [] 00:17:59.215 }, 00:17:59.215 { 00:17:59.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.215 "subtype": "NVMe", 00:17:59.215 "listen_addresses": [ 00:17:59.215 { 00:17:59.215 "trtype": "TCP", 00:17:59.215 "adrfam": "IPv4", 00:17:59.215 "traddr": "10.0.0.3", 00:17:59.215 "trsvcid": "4420" 00:17:59.215 } 00:17:59.215 ], 00:17:59.215 "allow_any_host": true, 00:17:59.215 "hosts": [], 00:17:59.215 "serial_number": "SPDK00000000000001", 00:17:59.215 "model_number": "SPDK bdev Controller", 00:17:59.215 "max_namespaces": 32, 00:17:59.215 "min_cntlid": 1, 00:17:59.215 "max_cntlid": 65519, 00:17:59.215 "namespaces": [ 00:17:59.215 { 00:17:59.215 "nsid": 1, 00:17:59.215 "bdev_name": "Malloc0", 00:17:59.215 "name": "Malloc0", 00:17:59.215 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:59.215 "eui64": "ABCDEF0123456789", 00:17:59.215 "uuid": "773e2767-71e6-4eb8-800c-5e75e5e4f14f" 00:17:59.215 } 00:17:59.215 ] 00:17:59.215 } 00:17:59.215 ] 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.215 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:59.215 [2024-11-26 16:23:24.791050] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
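Before following the identify trace that starts here, it is worth collecting the target configuration in one place. The rpc_cmd calls above (rpc_cmd is the autotest wrapper that drives SPDK's RPC interface over the /var/tmp/spdk.sock socket named in the waitforlisten output, equivalent to invoking scripts/rpc.py) reduce to:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
rpc_cmd nvmf_get_subsystems    # returns the JSON dump shown above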
00:17:59.215 [2024-11-26 16:23:24.791114] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88175 ] 00:17:59.477 [2024-11-26 16:23:24.944419] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:59.477 [2024-11-26 16:23:24.947560] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:59.477 [2024-11-26 16:23:24.947571] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:59.477 [2024-11-26 16:23:24.947585] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:59.477 [2024-11-26 16:23:24.947597] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:59.477 [2024-11-26 16:23:24.947881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:59.477 [2024-11-26 16:23:24.947944] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x166ca00 0 00:17:59.477 [2024-11-26 16:23:24.955421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:59.477 [2024-11-26 16:23:24.955446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:59.477 [2024-11-26 16:23:24.955469] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:59.477 [2024-11-26 16:23:24.955472] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:59.477 [2024-11-26 16:23:24.955506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.955514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.955518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.477 [2024-11-26 16:23:24.955532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:59.477 [2024-11-26 16:23:24.955564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.477 [2024-11-26 16:23:24.963439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.477 [2024-11-26 16:23:24.963461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.477 [2024-11-26 16:23:24.963482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.477 [2024-11-26 16:23:24.963502] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:59.477 [2024-11-26 16:23:24.963511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:59.477 [2024-11-26 16:23:24.963517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:59.477 [2024-11-26 16:23:24.963534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:59.477 [2024-11-26 16:23:24.963543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.477 [2024-11-26 16:23:24.963553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.477 [2024-11-26 16:23:24.963581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.477 [2024-11-26 16:23:24.963641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.477 [2024-11-26 16:23:24.963648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.477 [2024-11-26 16:23:24.963652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.477 [2024-11-26 16:23:24.963677] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:59.477 [2024-11-26 16:23:24.963702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:59.477 [2024-11-26 16:23:24.963710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.477 [2024-11-26 16:23:24.963727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.477 [2024-11-26 16:23:24.963749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.477 [2024-11-26 16:23:24.963817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.477 [2024-11-26 16:23:24.963824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.477 [2024-11-26 16:23:24.963827] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.477 [2024-11-26 16:23:24.963838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:59.477 [2024-11-26 16:23:24.963847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:59.477 [2024-11-26 16:23:24.963855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.477 [2024-11-26 16:23:24.963870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.477 [2024-11-26 16:23:24.963890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.477 [2024-11-26 16:23:24.963933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.477 [2024-11-26 16:23:24.963940] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.477 [2024-11-26 16:23:24.963944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.477 [2024-11-26 16:23:24.963954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:59.477 [2024-11-26 16:23:24.963965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.963973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.477 [2024-11-26 16:23:24.963981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.477 [2024-11-26 16:23:24.964000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.477 [2024-11-26 16:23:24.964048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.477 [2024-11-26 16:23:24.964055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.477 [2024-11-26 16:23:24.964059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.964063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.477 [2024-11-26 16:23:24.964068] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:59.477 [2024-11-26 16:23:24.964074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:59.477 [2024-11-26 16:23:24.964082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:59.477 [2024-11-26 16:23:24.964192] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:59.477 [2024-11-26 16:23:24.964198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:59.477 [2024-11-26 16:23:24.964208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.964213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.477 [2024-11-26 16:23:24.964216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.477 [2024-11-26 16:23:24.964224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.477 [2024-11-26 16:23:24.964245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.477 [2024-11-26 16:23:24.964286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.477 [2024-11-26 16:23:24.964293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.477 [2024-11-26 16:23:24.964297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:59.477 [2024-11-26 16:23:24.964301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.477 [2024-11-26 16:23:24.964307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:59.477 [2024-11-26 16:23:24.964317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.964333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.478 [2024-11-26 16:23:24.964353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.478 [2024-11-26 16:23:24.964416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.478 [2024-11-26 16:23:24.964425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.478 [2024-11-26 16:23:24.964429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.478 [2024-11-26 16:23:24.964438] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:59.478 [2024-11-26 16:23:24.964443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:59.478 [2024-11-26 16:23:24.964452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:59.478 [2024-11-26 16:23:24.964462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:59.478 [2024-11-26 16:23:24.964473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.964486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.478 [2024-11-26 16:23:24.964508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.478 [2024-11-26 16:23:24.964591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.478 [2024-11-26 16:23:24.964598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.478 [2024-11-26 16:23:24.964602] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964607] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166ca00): datao=0, datal=4096, cccid=0 00:17:59.478 [2024-11-26 16:23:24.964612] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a56c0) on tqpair(0x166ca00): expected_datao=0, payload_size=4096 00:17:59.478 [2024-11-26 16:23:24.964617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964625] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964630] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.478 [2024-11-26 16:23:24.964645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.478 [2024-11-26 16:23:24.964649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.478 [2024-11-26 16:23:24.964661] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:59.478 [2024-11-26 16:23:24.964667] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:59.478 [2024-11-26 16:23:24.964671] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:59.478 [2024-11-26 16:23:24.964681] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:59.478 [2024-11-26 16:23:24.964686] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:59.478 [2024-11-26 16:23:24.964691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:59.478 [2024-11-26 16:23:24.964730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:59.478 [2024-11-26 16:23:24.964739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.964757] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.478 [2024-11-26 16:23:24.964780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.478 [2024-11-26 16:23:24.964841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.478 [2024-11-26 16:23:24.964849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.478 [2024-11-26 16:23:24.964853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.478 [2024-11-26 16:23:24.964866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.964882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.478 
[2024-11-26 16:23:24.964889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.964904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.478 [2024-11-26 16:23:24.964911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.964934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.478 [2024-11-26 16:23:24.964941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.964956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.478 [2024-11-26 16:23:24.964961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:59.478 [2024-11-26 16:23:24.964971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:59.478 [2024-11-26 16:23:24.964978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.964983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.964990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.478 [2024-11-26 16:23:24.965021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a56c0, cid 0, qid 0 00:17:59.478 [2024-11-26 16:23:24.965044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5840, cid 1, qid 0 00:17:59.478 [2024-11-26 16:23:24.965050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a59c0, cid 2, qid 0 00:17:59.478 [2024-11-26 16:23:24.965055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.478 [2024-11-26 16:23:24.965059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5cc0, cid 4, qid 0 00:17:59.478 [2024-11-26 16:23:24.965156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.478 [2024-11-26 16:23:24.965169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.478 [2024-11-26 16:23:24.965173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.965177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5cc0) on tqpair=0x166ca00 00:17:59.478 [2024-11-26 
16:23:24.965183] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:59.478 [2024-11-26 16:23:24.965189] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:59.478 [2024-11-26 16:23:24.965201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.965206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.965231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.478 [2024-11-26 16:23:24.965252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5cc0, cid 4, qid 0 00:17:59.478 [2024-11-26 16:23:24.965311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.478 [2024-11-26 16:23:24.965319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.478 [2024-11-26 16:23:24.965323] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.965327] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166ca00): datao=0, datal=4096, cccid=4 00:17:59.478 [2024-11-26 16:23:24.965331] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a5cc0) on tqpair(0x166ca00): expected_datao=0, payload_size=4096 00:17:59.478 [2024-11-26 16:23:24.965336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.965344] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.965348] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.965386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.478 [2024-11-26 16:23:24.965395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.478 [2024-11-26 16:23:24.965399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.965404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5cc0) on tqpair=0x166ca00 00:17:59.478 [2024-11-26 16:23:24.965418] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:59.478 [2024-11-26 16:23:24.965445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.965452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166ca00) 00:17:59.478 [2024-11-26 16:23:24.965460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.478 [2024-11-26 16:23:24.965468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.478 [2024-11-26 16:23:24.965473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x166ca00) 00:17:59.479 [2024-11-26 16:23:24.965483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.479 [2024-11-26 16:23:24.965526] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5cc0, cid 4, qid 0 00:17:59.479 [2024-11-26 16:23:24.965535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5e40, cid 5, qid 0 00:17:59.479 [2024-11-26 16:23:24.965642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.479 [2024-11-26 16:23:24.965649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.479 [2024-11-26 16:23:24.965653] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965657] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166ca00): datao=0, datal=1024, cccid=4 00:17:59.479 [2024-11-26 16:23:24.965662] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a5cc0) on tqpair(0x166ca00): expected_datao=0, payload_size=1024 00:17:59.479 [2024-11-26 16:23:24.965667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965674] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965679] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.479 [2024-11-26 16:23:24.965691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.479 [2024-11-26 16:23:24.965695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5e40) on tqpair=0x166ca00 00:17:59.479 [2024-11-26 16:23:24.965733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.479 [2024-11-26 16:23:24.965741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.479 [2024-11-26 16:23:24.965744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5cc0) on tqpair=0x166ca00 00:17:59.479 [2024-11-26 16:23:24.965765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166ca00) 00:17:59.479 [2024-11-26 16:23:24.965779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.479 [2024-11-26 16:23:24.965805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5cc0, cid 4, qid 0 00:17:59.479 [2024-11-26 16:23:24.965873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.479 [2024-11-26 16:23:24.965880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.479 [2024-11-26 16:23:24.965884] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965887] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166ca00): datao=0, datal=3072, cccid=4 00:17:59.479 [2024-11-26 16:23:24.965892] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a5cc0) on tqpair(0x166ca00): expected_datao=0, payload_size=3072 00:17:59.479 [2024-11-26 16:23:24.965897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965904] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:59.479 [2024-11-26 16:23:24.965908] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.479 [2024-11-26 16:23:24.965923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.479 [2024-11-26 16:23:24.965926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5cc0) on tqpair=0x166ca00 00:17:59.479 [2024-11-26 16:23:24.965940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.965945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x166ca00) 00:17:59.479 [2024-11-26 16:23:24.965953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.479 [2024-11-26 16:23:24.965978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5cc0, cid 4, qid 0 00:17:59.479 [2024-11-26 16:23:24.966036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.479 [2024-11-26 16:23:24.966043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.479 [2024-11-26 16:23:24.966047] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.966051] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x166ca00): datao=0, datal=8, cccid=4 00:17:59.479 [2024-11-26 16:23:24.966055] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a5cc0) on tqpair(0x166ca00): expected_datao=0, payload_size=8 00:17:59.479 ===================================================== 00:17:59.479 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:59.479 ===================================================== 00:17:59.479 Controller Capabilities/Features 00:17:59.479 ================================ 00:17:59.479 Vendor ID: 0000 00:17:59.479 Subsystem Vendor ID: 0000 00:17:59.479 Serial Number: .................... 00:17:59.479 Model Number: ........................................ 
00:17:59.479 Firmware Version: 25.01 00:17:59.479 Recommended Arb Burst: 0 00:17:59.479 IEEE OUI Identifier: 00 00 00 00:17:59.479 Multi-path I/O 00:17:59.479 May have multiple subsystem ports: No 00:17:59.479 May have multiple controllers: No 00:17:59.479 Associated with SR-IOV VF: No 00:17:59.479 Max Data Transfer Size: 131072 00:17:59.479 Max Number of Namespaces: 0 00:17:59.479 Max Number of I/O Queues: 1024 00:17:59.479 NVMe Specification Version (VS): 1.3 00:17:59.479 NVMe Specification Version (Identify): 1.3 00:17:59.479 Maximum Queue Entries: 128 00:17:59.479 Contiguous Queues Required: Yes 00:17:59.479 Arbitration Mechanisms Supported 00:17:59.479 Weighted Round Robin: Not Supported 00:17:59.479 Vendor Specific: Not Supported 00:17:59.479 Reset Timeout: 15000 ms 00:17:59.479 Doorbell Stride: 4 bytes 00:17:59.479 NVM Subsystem Reset: Not Supported 00:17:59.479 Command Sets Supported 00:17:59.479 NVM Command Set: Supported 00:17:59.479 Boot Partition: Not Supported 00:17:59.479 Memory Page Size Minimum: 4096 bytes 00:17:59.479 Memory Page Size Maximum: 4096 bytes 00:17:59.479 Persistent Memory Region: Not Supported 00:17:59.479 Optional Asynchronous Events Supported 00:17:59.479 Namespace Attribute Notices: Not Supported 00:17:59.479 Firmware Activation Notices: Not Supported 00:17:59.479 ANA Change Notices: Not Supported 00:17:59.479 PLE Aggregate Log Change Notices: Not Supported 00:17:59.479 LBA Status Info Alert Notices: Not Supported 00:17:59.479 EGE Aggregate Log Change Notices: Not Supported 00:17:59.479 Normal NVM Subsystem Shutdown event: Not Supported 00:17:59.479 Zone Descriptor Change Notices: Not Supported 00:17:59.479 Discovery Log Change Notices: Supported 00:17:59.479 Controller Attributes 00:17:59.479 128-bit Host Identifier: Not Supported 00:17:59.479 Non-Operational Permissive Mode: Not Supported 00:17:59.479 NVM Sets: Not Supported 00:17:59.479 Read Recovery Levels: Not Supported 00:17:59.479 Endurance Groups: Not Supported 00:17:59.479 Predictable Latency Mode: Not Supported 00:17:59.479 Traffic Based Keep ALive: Not Supported 00:17:59.479 Namespace Granularity: Not Supported 00:17:59.479 SQ Associations: Not Supported 00:17:59.479 UUID List: Not Supported 00:17:59.479 Multi-Domain Subsystem: Not Supported 00:17:59.479 Fixed Capacity Management: Not Supported 00:17:59.479 Variable Capacity Management: Not Supported 00:17:59.479 Delete Endurance Group: Not Supported 00:17:59.479 Delete NVM Set: Not Supported 00:17:59.479 Extended LBA Formats Supported: Not Supported 00:17:59.479 Flexible Data Placement Supported: Not Supported 00:17:59.479 00:17:59.479 Controller Memory Buffer Support 00:17:59.479 ================================ 00:17:59.479 Supported: No 00:17:59.479 00:17:59.479 Persistent Memory Region Support 00:17:59.479 ================================ 00:17:59.479 Supported: No 00:17:59.479 00:17:59.479 Admin Command Set Attributes 00:17:59.479 ============================ 00:17:59.479 Security Send/Receive: Not Supported 00:17:59.479 Format NVM: Not Supported 00:17:59.479 Firmware Activate/Download: Not Supported 00:17:59.479 Namespace Management: Not Supported 00:17:59.479 Device Self-Test: Not Supported 00:17:59.479 Directives: Not Supported 00:17:59.479 NVMe-MI: Not Supported 00:17:59.479 Virtualization Management: Not Supported 00:17:59.479 Doorbell Buffer Config: Not Supported 00:17:59.479 Get LBA Status Capability: Not Supported 00:17:59.479 Command & Feature Lockdown Capability: Not Supported 00:17:59.479 Abort Command Limit: 1 00:17:59.479 Async 
Event Request Limit: 4 00:17:59.479 Number of Firmware Slots: N/A 00:17:59.479 Firmware Slot 1 Read-Only: N/A 00:17:59.479 [2024-11-26 16:23:24.966060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.966084] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.966088] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.966104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.479 [2024-11-26 16:23:24.966112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.479 [2024-11-26 16:23:24.966115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.479 [2024-11-26 16:23:24.966120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5cc0) on tqpair=0x166ca00 00:17:59.479 Firmware Activation Without Reset: N/A 00:17:59.479 Multiple Update Detection Support: N/A 00:17:59.479 Firmware Update Granularity: No Information Provided 00:17:59.479 Per-Namespace SMART Log: No 00:17:59.479 Asymmetric Namespace Access Log Page: Not Supported 00:17:59.479 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:59.479 Command Effects Log Page: Not Supported 00:17:59.479 Get Log Page Extended Data: Supported 00:17:59.479 Telemetry Log Pages: Not Supported 00:17:59.480 Persistent Event Log Pages: Not Supported 00:17:59.480 Supported Log Pages Log Page: May Support 00:17:59.480 Commands Supported & Effects Log Page: Not Supported 00:17:59.480 Feature Identifiers & Effects Log Page:May Support 00:17:59.480 NVMe-MI Commands & Effects Log Page: May Support 00:17:59.480 Data Area 4 for Telemetry Log: Not Supported 00:17:59.480 Error Log Page Entries Supported: 128 00:17:59.480 Keep Alive: Not Supported 00:17:59.480 00:17:59.480 NVM Command Set Attributes 00:17:59.480 ========================== 00:17:59.480 Submission Queue Entry Size 00:17:59.480 Max: 1 00:17:59.480 Min: 1 00:17:59.480 Completion Queue Entry Size 00:17:59.480 Max: 1 00:17:59.480 Min: 1 00:17:59.480 Number of Namespaces: 0 00:17:59.480 Compare Command: Not Supported 00:17:59.480 Write Uncorrectable Command: Not Supported 00:17:59.480 Dataset Management Command: Not Supported 00:17:59.480 Write Zeroes Command: Not Supported 00:17:59.480 Set Features Save Field: Not Supported 00:17:59.480 Reservations: Not Supported 00:17:59.480 Timestamp: Not Supported 00:17:59.480 Copy: Not Supported 00:17:59.480 Volatile Write Cache: Not Present 00:17:59.480 Atomic Write Unit (Normal): 1 00:17:59.480 Atomic Write Unit (PFail): 1 00:17:59.480 Atomic Compare & Write Unit: 1 00:17:59.480 Fused Compare & Write: Supported 00:17:59.480 Scatter-Gather List 00:17:59.480 SGL Command Set: Supported 00:17:59.480 SGL Keyed: Supported 00:17:59.480 SGL Bit Bucket Descriptor: Not Supported 00:17:59.480 SGL Metadata Pointer: Not Supported 00:17:59.480 Oversized SGL: Not Supported 00:17:59.480 SGL Metadata Address: Not Supported 00:17:59.480 SGL Offset: Supported 00:17:59.480 Transport SGL Data Block: Not Supported 00:17:59.480 Replay Protected Memory Block: Not Supported 00:17:59.480 00:17:59.480 Firmware Slot Information 00:17:59.480 ========================= 00:17:59.480 Active slot: 0 00:17:59.480 00:17:59.480 00:17:59.480 Error Log 00:17:59.480 ========= 00:17:59.480 00:17:59.480 Active Namespaces 00:17:59.480 ================= 00:17:59.480 Discovery Log Page 00:17:59.480 ================== 00:17:59.480 Generation Counter: 2 00:17:59.480 Number of Records: 2
00:17:59.480 Record Format: 0 00:17:59.480 00:17:59.480 Discovery Log Entry 0 00:17:59.480 ---------------------- 00:17:59.480 Transport Type: 3 (TCP) 00:17:59.480 Address Family: 1 (IPv4) 00:17:59.480 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:59.480 Entry Flags: 00:17:59.480 Duplicate Returned Information: 1 00:17:59.480 Explicit Persistent Connection Support for Discovery: 1 00:17:59.480 Transport Requirements: 00:17:59.480 Secure Channel: Not Required 00:17:59.480 Port ID: 0 (0x0000) 00:17:59.480 Controller ID: 65535 (0xffff) 00:17:59.480 Admin Max SQ Size: 128 00:17:59.480 Transport Service Identifier: 4420 00:17:59.480 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:59.480 Transport Address: 10.0.0.3 00:17:59.480 Discovery Log Entry 1 00:17:59.480 ---------------------- 00:17:59.480 Transport Type: 3 (TCP) 00:17:59.480 Address Family: 1 (IPv4) 00:17:59.480 Subsystem Type: 2 (NVM Subsystem) 00:17:59.480 Entry Flags: 00:17:59.480 Duplicate Returned Information: 0 00:17:59.480 Explicit Persistent Connection Support for Discovery: 0 00:17:59.480 Transport Requirements: 00:17:59.480 Secure Channel: Not Required 00:17:59.480 Port ID: 0 (0x0000) 00:17:59.480 Controller ID: 65535 (0xffff) 00:17:59.480 Admin Max SQ Size: 128 00:17:59.480 Transport Service Identifier: 4420 00:17:59.480 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:59.480 Transport Address: 10.0.0.3 [2024-11-26 16:23:24.966209] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:59.480 [2024-11-26 16:23:24.966222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a56c0) on tqpair=0x166ca00 00:17:59.480 [2024-11-26 16:23:24.966230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.480 [2024-11-26 16:23:24.966236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5840) on tqpair=0x166ca00 00:17:59.480 [2024-11-26 16:23:24.966241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.480 [2024-11-26 16:23:24.966246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a59c0) on tqpair=0x166ca00 00:17:59.480 [2024-11-26 16:23:24.966251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.480 [2024-11-26 16:23:24.966256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.480 [2024-11-26 16:23:24.966261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.480 [2024-11-26 16:23:24.966273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.480 [2024-11-26 16:23:24.966291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.480 [2024-11-26 16:23:24.966314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.480 [2024-11-26 16:23:24.966359] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.480 [2024-11-26 16:23:24.966367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.480 [2024-11-26 16:23:24.966371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.480 [2024-11-26 16:23:24.966384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.480 [2024-11-26 16:23:24.966429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.480 [2024-11-26 16:23:24.966455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.480 [2024-11-26 16:23:24.966514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.480 [2024-11-26 16:23:24.966521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.480 [2024-11-26 16:23:24.966525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.480 [2024-11-26 16:23:24.966534] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:59.480 [2024-11-26 16:23:24.966539] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:59.480 [2024-11-26 16:23:24.966549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.480 [2024-11-26 16:23:24.966566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.480 [2024-11-26 16:23:24.966585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.480 [2024-11-26 16:23:24.966632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.480 [2024-11-26 16:23:24.966639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.480 [2024-11-26 16:23:24.966643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.480 [2024-11-26 16:23:24.966658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.480 [2024-11-26 16:23:24.966674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.480 [2024-11-26 16:23:24.966693] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.480 [2024-11-26 16:23:24.966737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.480 [2024-11-26 16:23:24.966744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.480 [2024-11-26 16:23:24.966748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.480 [2024-11-26 16:23:24.966763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.480 [2024-11-26 16:23:24.966779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.480 [2024-11-26 16:23:24.966797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.480 [2024-11-26 16:23:24.966838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.480 [2024-11-26 16:23:24.966845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.480 [2024-11-26 16:23:24.966849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.480 [2024-11-26 16:23:24.966864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.480 [2024-11-26 16:23:24.966869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.966872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.481 [2024-11-26 16:23:24.966880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.481 [2024-11-26 16:23:24.966898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.481 [2024-11-26 16:23:24.966939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.481 [2024-11-26 16:23:24.966946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.481 [2024-11-26 16:23:24.966950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.966954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.481 [2024-11-26 16:23:24.966964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.966969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.966973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.481 [2024-11-26 16:23:24.966981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.481 [2024-11-26 16:23:24.966999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.481 [2024-11-26 16:23:24.967043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.481 [2024-11-26 
16:23:24.967050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.481 [2024-11-26 16:23:24.967054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.967058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.481 [2024-11-26 16:23:24.967085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.967090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.967094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.481 [2024-11-26 16:23:24.967102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.481 [2024-11-26 16:23:24.967121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.481 [2024-11-26 16:23:24.967169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.481 [2024-11-26 16:23:24.967176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.481 [2024-11-26 16:23:24.967180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.967185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.481 [2024-11-26 16:23:24.967195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.967201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.967205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.481 [2024-11-26 16:23:24.967212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.481 [2024-11-26 16:23:24.967231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.481 [2024-11-26 16:23:24.967276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.481 [2024-11-26 16:23:24.967283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.481 [2024-11-26 16:23:24.967287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.967291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.481 [2024-11-26 16:23:24.967302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.967307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.967311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.481 [2024-11-26 16:23:24.967319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.481 [2024-11-26 16:23:24.967338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.481 [2024-11-26 16:23:24.970467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.481 [2024-11-26 16:23:24.970491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.481 [2024-11-26 16:23:24.970496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.481 
[2024-11-26 16:23:24.970501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.481 [2024-11-26 16:23:24.970516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.970522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.970526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x166ca00) 00:17:59.481 [2024-11-26 16:23:24.970535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.481 [2024-11-26 16:23:24.970564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a5b40, cid 3, qid 0 00:17:59.481 [2024-11-26 16:23:24.970616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.481 [2024-11-26 16:23:24.970624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.481 [2024-11-26 16:23:24.970628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.481 [2024-11-26 16:23:24.970632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a5b40) on tqpair=0x166ca00 00:17:59.481 [2024-11-26 16:23:24.970641] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:17:59.481 00:17:59.481 16:23:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:59.481 [2024-11-26 16:23:25.012258] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:17:59.481 [2024-11-26 16:23:25.012313] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88181 ] 00:17:59.745 [2024-11-26 16:23:25.170002] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:59.745 [2024-11-26 16:23:25.170086] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:59.745 [2024-11-26 16:23:25.170093] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:59.745 [2024-11-26 16:23:25.170107] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:59.745 [2024-11-26 16:23:25.170118] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:59.745 [2024-11-26 16:23:25.170420] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:59.745 [2024-11-26 16:23:25.170478] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x204ea00 0 00:17:59.745 [2024-11-26 16:23:25.176405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:59.745 [2024-11-26 16:23:25.176427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:59.745 [2024-11-26 16:23:25.176433] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:59.745 [2024-11-26 16:23:25.176437] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:59.745 [2024-11-26 16:23:25.176470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.745 [2024-11-26 16:23:25.176478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.745 [2024-11-26 16:23:25.176482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.745 [2024-11-26 16:23:25.176495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:59.745 [2024-11-26 16:23:25.176528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.745 [2024-11-26 16:23:25.184360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.745 [2024-11-26 16:23:25.184380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.745 [2024-11-26 16:23:25.184401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.745 [2024-11-26 16:23:25.184406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 00:17:59.745 [2024-11-26 16:23:25.184420] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:59.745 [2024-11-26 16:23:25.184429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:59.745 [2024-11-26 16:23:25.184436] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:59.745 [2024-11-26 16:23:25.184452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.745 [2024-11-26 16:23:25.184457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.745 [2024-11-26 16:23:25.184461] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.745 [2024-11-26 16:23:25.184471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.745 [2024-11-26 16:23:25.184499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.745 [2024-11-26 16:23:25.184550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.745 [2024-11-26 16:23:25.184557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.745 [2024-11-26 16:23:25.184560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.745 [2024-11-26 16:23:25.184565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 00:17:59.745 [2024-11-26 16:23:25.184570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:59.745 [2024-11-26 16:23:25.184578] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:59.745 [2024-11-26 16:23:25.184585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.745 [2024-11-26 16:23:25.184590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.745 [2024-11-26 16:23:25.184594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.745 [2024-11-26 16:23:25.184601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.745 [2024-11-26 16:23:25.184636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.745 [2024-11-26 16:23:25.184705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.745 [2024-11-26 16:23:25.184730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.745 [2024-11-26 16:23:25.184734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.745 [2024-11-26 16:23:25.184739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 00:17:59.745 [2024-11-26 16:23:25.184746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:59.745 [2024-11-26 16:23:25.184755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:59.745 [2024-11-26 16:23:25.184764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.184769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.184773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.184781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.746 [2024-11-26 16:23:25.184802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.746 [2024-11-26 16:23:25.184849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.746 [2024-11-26 16:23:25.184857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.746 
[2024-11-26 16:23:25.184861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.184866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 00:17:59.746 [2024-11-26 16:23:25.184872] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:59.746 [2024-11-26 16:23:25.184883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.184889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.184893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.184901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.746 [2024-11-26 16:23:25.184920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.746 [2024-11-26 16:23:25.184964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.746 [2024-11-26 16:23:25.184971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.746 [2024-11-26 16:23:25.184975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.184980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 00:17:59.746 [2024-11-26 16:23:25.184985] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:59.746 [2024-11-26 16:23:25.184991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:59.746 [2024-11-26 16:23:25.185000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:59.746 [2024-11-26 16:23:25.185111] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:59.746 [2024-11-26 16:23:25.185118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:59.746 [2024-11-26 16:23:25.185127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.185144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.746 [2024-11-26 16:23:25.185165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.746 [2024-11-26 16:23:25.185209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.746 [2024-11-26 16:23:25.185216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.746 [2024-11-26 16:23:25.185220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 
00:17:59.746 [2024-11-26 16:23:25.185246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:59.746 [2024-11-26 16:23:25.185258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.185276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.746 [2024-11-26 16:23:25.185295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.746 [2024-11-26 16:23:25.185340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.746 [2024-11-26 16:23:25.185348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.746 [2024-11-26 16:23:25.185352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 00:17:59.746 [2024-11-26 16:23:25.185362] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:59.746 [2024-11-26 16:23:25.185368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:59.746 [2024-11-26 16:23:25.185377] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:59.746 [2024-11-26 16:23:25.185388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:59.746 [2024-11-26 16:23:25.185399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.185426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.746 [2024-11-26 16:23:25.185451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.746 [2024-11-26 16:23:25.185548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.746 [2024-11-26 16:23:25.185556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.746 [2024-11-26 16:23:25.185560] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185564] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204ea00): datao=0, datal=4096, cccid=0 00:17:59.746 [2024-11-26 16:23:25.185570] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20876c0) on tqpair(0x204ea00): expected_datao=0, payload_size=4096 00:17:59.746 [2024-11-26 16:23:25.185576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185584] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185589] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.746 [2024-11-26 16:23:25.185605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.746 [2024-11-26 16:23:25.185609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 00:17:59.746 [2024-11-26 16:23:25.185623] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:59.746 [2024-11-26 16:23:25.185629] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:59.746 [2024-11-26 16:23:25.185634] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:59.746 [2024-11-26 16:23:25.185644] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:59.746 [2024-11-26 16:23:25.185650] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:59.746 [2024-11-26 16:23:25.185655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:59.746 [2024-11-26 16:23:25.185681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:59.746 [2024-11-26 16:23:25.185689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.185722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.746 [2024-11-26 16:23:25.185743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.746 [2024-11-26 16:23:25.185788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.746 [2024-11-26 16:23:25.185795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.746 [2024-11-26 16:23:25.185799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 00:17:59.746 [2024-11-26 16:23:25.185811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.185827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.746 [2024-11-26 16:23:25.185834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 
16:23:25.185842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.185848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.746 [2024-11-26 16:23:25.185854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.185868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.746 [2024-11-26 16:23:25.185875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.746 [2024-11-26 16:23:25.185883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.746 [2024-11-26 16:23:25.185889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.746 [2024-11-26 16:23:25.185894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:59.746 [2024-11-26 16:23:25.185903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:59.746 [2024-11-26 16:23:25.185910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.185915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204ea00) 00:17:59.747 [2024-11-26 16:23:25.185922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.747 [2024-11-26 16:23:25.185947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20876c0, cid 0, qid 0 00:17:59.747 [2024-11-26 16:23:25.185955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087840, cid 1, qid 0 00:17:59.747 [2024-11-26 16:23:25.185960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20879c0, cid 2, qid 0 00:17:59.747 [2024-11-26 16:23:25.185966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.747 [2024-11-26 16:23:25.185971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087cc0, cid 4, qid 0 00:17:59.747 [2024-11-26 16:23:25.186053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.747 [2024-11-26 16:23:25.186060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.747 [2024-11-26 16:23:25.186064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087cc0) on tqpair=0x204ea00 00:17:59.747 [2024-11-26 16:23:25.186074] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:59.747 [2024-11-26 16:23:25.186080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204ea00) 00:17:59.747 [2024-11-26 16:23:25.186119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:59.747 [2024-11-26 16:23:25.186138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087cc0, cid 4, qid 0 00:17:59.747 [2024-11-26 16:23:25.186188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.747 [2024-11-26 16:23:25.186195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.747 [2024-11-26 16:23:25.186199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087cc0) on tqpair=0x204ea00 00:17:59.747 [2024-11-26 16:23:25.186268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204ea00) 00:17:59.747 [2024-11-26 16:23:25.186301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.747 [2024-11-26 16:23:25.186320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087cc0, cid 4, qid 0 00:17:59.747 [2024-11-26 16:23:25.186393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.747 [2024-11-26 16:23:25.186402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.747 [2024-11-26 16:23:25.186406] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186410] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204ea00): datao=0, datal=4096, cccid=4 00:17:59.747 [2024-11-26 16:23:25.186415] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2087cc0) on tqpair(0x204ea00): expected_datao=0, payload_size=4096 00:17:59.747 [2024-11-26 16:23:25.186420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186428] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186432] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 
16:23:25.186440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.747 [2024-11-26 16:23:25.186447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.747 [2024-11-26 16:23:25.186450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087cc0) on tqpair=0x204ea00 00:17:59.747 [2024-11-26 16:23:25.186465] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:59.747 [2024-11-26 16:23:25.186477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204ea00) 00:17:59.747 [2024-11-26 16:23:25.186509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.747 [2024-11-26 16:23:25.186531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087cc0, cid 4, qid 0 00:17:59.747 [2024-11-26 16:23:25.186676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.747 [2024-11-26 16:23:25.186683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.747 [2024-11-26 16:23:25.186687] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186691] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204ea00): datao=0, datal=4096, cccid=4 00:17:59.747 [2024-11-26 16:23:25.186696] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2087cc0) on tqpair(0x204ea00): expected_datao=0, payload_size=4096 00:17:59.747 [2024-11-26 16:23:25.186701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186708] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186712] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.747 [2024-11-26 16:23:25.186727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.747 [2024-11-26 16:23:25.186730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087cc0) on tqpair=0x204ea00 00:17:59.747 [2024-11-26 16:23:25.186753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x204ea00) 00:17:59.747 [2024-11-26 16:23:25.186786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.747 [2024-11-26 16:23:25.186806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087cc0, cid 4, qid 0 00:17:59.747 [2024-11-26 16:23:25.186864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.747 [2024-11-26 16:23:25.186876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.747 [2024-11-26 16:23:25.186880] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186884] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204ea00): datao=0, datal=4096, cccid=4 00:17:59.747 [2024-11-26 16:23:25.186889] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2087cc0) on tqpair(0x204ea00): expected_datao=0, payload_size=4096 00:17:59.747 [2024-11-26 16:23:25.186894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186902] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186906] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.747 [2024-11-26 16:23:25.186921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.747 [2024-11-26 16:23:25.186925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.186929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087cc0) on tqpair=0x204ea00 00:17:59.747 [2024-11-26 16:23:25.186938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186983] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:59.747 [2024-11-26 16:23:25.186988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:59.747 [2024-11-26 16:23:25.186995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:59.747 [2024-11-26 16:23:25.187010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.747 
[2024-11-26 16:23:25.187015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204ea00) 00:17:59.747 [2024-11-26 16:23:25.187023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.747 [2024-11-26 16:23:25.187030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.187035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.747 [2024-11-26 16:23:25.187039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x204ea00) 00:17:59.747 [2024-11-26 16:23:25.187045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.747 [2024-11-26 16:23:25.187070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087cc0, cid 4, qid 0 00:17:59.747 [2024-11-26 16:23:25.187078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087e40, cid 5, qid 0 00:17:59.747 [2024-11-26 16:23:25.187136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.748 [2024-11-26 16:23:25.187143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.748 [2024-11-26 16:23:25.187147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087cc0) on tqpair=0x204ea00 00:17:59.748 [2024-11-26 16:23:25.187158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.748 [2024-11-26 16:23:25.187164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.748 [2024-11-26 16:23:25.187168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087e40) on tqpair=0x204ea00 00:17:59.748 [2024-11-26 16:23:25.187182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x204ea00) 00:17:59.748 [2024-11-26 16:23:25.187195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.748 [2024-11-26 16:23:25.187213] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087e40, cid 5, qid 0 00:17:59.748 [2024-11-26 16:23:25.187258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.748 [2024-11-26 16:23:25.187265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.748 [2024-11-26 16:23:25.187269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087e40) on tqpair=0x204ea00 00:17:59.748 [2024-11-26 16:23:25.187284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x204ea00) 00:17:59.748 [2024-11-26 16:23:25.187296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.748 [2024-11-26 16:23:25.187313] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087e40, cid 5, qid 0 00:17:59.748 [2024-11-26 16:23:25.187373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.748 [2024-11-26 16:23:25.187382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.748 [2024-11-26 16:23:25.187385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087e40) on tqpair=0x204ea00 00:17:59.748 [2024-11-26 16:23:25.187401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x204ea00) 00:17:59.748 [2024-11-26 16:23:25.187413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.748 [2024-11-26 16:23:25.187434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087e40, cid 5, qid 0 00:17:59.748 [2024-11-26 16:23:25.187475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.748 [2024-11-26 16:23:25.187482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.748 [2024-11-26 16:23:25.187486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087e40) on tqpair=0x204ea00 00:17:59.748 [2024-11-26 16:23:25.187508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x204ea00) 00:17:59.748 [2024-11-26 16:23:25.187522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.748 [2024-11-26 16:23:25.187530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x204ea00) 00:17:59.748 [2024-11-26 16:23:25.187541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.748 [2024-11-26 16:23:25.187548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x204ea00) 00:17:59.748 [2024-11-26 16:23:25.187559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.748 [2024-11-26 16:23:25.187567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x204ea00) 00:17:59.748 [2024-11-26 16:23:25.187578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.748 [2024-11-26 16:23:25.187598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087e40, cid 5, qid 0 00:17:59.748 
[2024-11-26 16:23:25.187605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087cc0, cid 4, qid 0 00:17:59.748 [2024-11-26 16:23:25.187611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087fc0, cid 6, qid 0 00:17:59.748 [2024-11-26 16:23:25.187616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2088140, cid 7, qid 0 00:17:59.748 [2024-11-26 16:23:25.187746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.748 [2024-11-26 16:23:25.187753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.748 [2024-11-26 16:23:25.187757] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187762] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204ea00): datao=0, datal=8192, cccid=5 00:17:59.748 [2024-11-26 16:23:25.187767] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2087e40) on tqpair(0x204ea00): expected_datao=0, payload_size=8192 00:17:59.748 [2024-11-26 16:23:25.187771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187792] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.748 [2024-11-26 16:23:25.187804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.748 [2024-11-26 16:23:25.187808] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187812] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204ea00): datao=0, datal=512, cccid=4 00:17:59.748 [2024-11-26 16:23:25.187817] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2087cc0) on tqpair(0x204ea00): expected_datao=0, payload_size=512 00:17:59.748 [2024-11-26 16:23:25.187821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187828] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187832] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.748 [2024-11-26 16:23:25.187843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.748 [2024-11-26 16:23:25.187847] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187851] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204ea00): datao=0, datal=512, cccid=6 00:17:59.748 [2024-11-26 16:23:25.187856] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2087fc0) on tqpair(0x204ea00): expected_datao=0, payload_size=512 00:17:59.748 [2024-11-26 16:23:25.187860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187867] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187871] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:59.748 [2024-11-26 16:23:25.187882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:59.748 [2024-11-26 16:23:25.187886] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187890] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x204ea00): datao=0, datal=4096, cccid=7 00:17:59.748 [2024-11-26 16:23:25.187895] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2088140) on tqpair(0x204ea00): expected_datao=0, payload_size=4096 00:17:59.748 [2024-11-26 16:23:25.187899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187906] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187910] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.748 [2024-11-26 16:23:25.187924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.748 [2024-11-26 16:23:25.187928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087e40) on tqpair=0x204ea00 00:17:59.748 [2024-11-26 16:23:25.187946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.748 [2024-11-26 16:23:25.187953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.748 [2024-11-26 16:23:25.187957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.748 [2024-11-26 16:23:25.187961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087cc0) on tqpair=0x204ea00 00:17:59.748 ===================================================== 00:17:59.748 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.748 ===================================================== 00:17:59.748 Controller Capabilities/Features 00:17:59.748 ================================ 00:17:59.748 Vendor ID: 8086 00:17:59.748 Subsystem Vendor ID: 8086 00:17:59.748 Serial Number: SPDK00000000000001 00:17:59.748 Model Number: SPDK bdev Controller 00:17:59.748 Firmware Version: 25.01 00:17:59.748 Recommended Arb Burst: 6 00:17:59.748 IEEE OUI Identifier: e4 d2 5c 00:17:59.748 Multi-path I/O 00:17:59.748 May have multiple subsystem ports: Yes 00:17:59.748 May have multiple controllers: Yes 00:17:59.748 Associated with SR-IOV VF: No 00:17:59.748 Max Data Transfer Size: 131072 00:17:59.748 Max Number of Namespaces: 32 00:17:59.748 Max Number of I/O Queues: 127 00:17:59.748 NVMe Specification Version (VS): 1.3 00:17:59.748 NVMe Specification Version (Identify): 1.3 00:17:59.748 Maximum Queue Entries: 128 00:17:59.748 Contiguous Queues Required: Yes 00:17:59.748 Arbitration Mechanisms Supported 00:17:59.748 Weighted Round Robin: Not Supported 00:17:59.748 Vendor Specific: Not Supported 00:17:59.748 Reset Timeout: 15000 ms 00:17:59.748 Doorbell Stride: 4 bytes 00:17:59.748 NVM Subsystem Reset: Not Supported 00:17:59.748 Command Sets Supported 00:17:59.748 NVM Command Set: Supported 00:17:59.748 Boot Partition: Not Supported 00:17:59.748 Memory Page Size Minimum: 4096 bytes 00:17:59.748 Memory Page Size Maximum: 4096 bytes 00:17:59.749 Persistent Memory Region: Not Supported 00:17:59.749 Optional Asynchronous Events Supported 00:17:59.749 Namespace Attribute Notices: Supported 00:17:59.749 Firmware Activation Notices: Not Supported 00:17:59.749 ANA Change Notices: Not Supported 00:17:59.749 PLE Aggregate Log Change Notices: Not Supported 00:17:59.749 LBA Status Info Alert 
Notices: Not Supported 00:17:59.749 EGE Aggregate Log Change Notices: Not Supported 00:17:59.749 Normal NVM Subsystem Shutdown event: Not Supported 00:17:59.749 Zone Descriptor Change Notices: Not Supported 00:17:59.749 Discovery Log Change Notices: Not Supported 00:17:59.749 Controller Attributes 00:17:59.749 128-bit Host Identifier: Supported 00:17:59.749 Non-Operational Permissive Mode: Not Supported 00:17:59.749 NVM Sets: Not Supported 00:17:59.749 Read Recovery Levels: Not Supported 00:17:59.749 Endurance Groups: Not Supported 00:17:59.749 Predictable Latency Mode: Not Supported 00:17:59.749 Traffic Based Keep ALive: Not Supported 00:17:59.749 Namespace Granularity: Not Supported 00:17:59.749 SQ Associations: Not Supported 00:17:59.749 UUID List: Not Supported 00:17:59.749 Multi-Domain Subsystem: Not Supported 00:17:59.749 Fixed Capacity Management: Not Supported 00:17:59.749 Variable Capacity Management: Not Supported 00:17:59.749 Delete Endurance Group: Not Supported 00:17:59.749 Delete NVM Set: Not Supported 00:17:59.749 Extended LBA Formats Supported: Not Supported 00:17:59.749 Flexible Data Placement Supported: Not Supported 00:17:59.749 00:17:59.749 Controller Memory Buffer Support 00:17:59.749 ================================ 00:17:59.749 Supported: No 00:17:59.749 00:17:59.749 Persistent Memory Region Support 00:17:59.749 ================================ 00:17:59.749 Supported: No 00:17:59.749 00:17:59.749 Admin Command Set Attributes 00:17:59.749 ============================ 00:17:59.749 Security Send/Receive: Not Supported 00:17:59.749 Format NVM: Not Supported 00:17:59.749 Firmware Activate/Download: Not Supported 00:17:59.749 Namespace Management: Not Supported 00:17:59.749 Device Self-Test: Not Supported 00:17:59.749 Directives: Not Supported 00:17:59.749 NVMe-MI: Not Supported 00:17:59.749 Virtualization Management: Not Supported 00:17:59.749 Doorbell Buffer Config: Not Supported 00:17:59.749 Get LBA Status Capability: Not Supported 00:17:59.749 Command & Feature Lockdown Capability: Not Supported 00:17:59.749 Abort Command Limit: 4 00:17:59.749 Async Event Request Limit: 4 00:17:59.749 Number of Firmware Slots: N/A 00:17:59.749 Firmware Slot 1 Read-Only: N/A 00:17:59.749 [2024-11-26 16:23:25.187973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.749 [2024-11-26 16:23:25.187980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.749 [2024-11-26 16:23:25.187984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.749 [2024-11-26 16:23:25.187988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087fc0) on tqpair=0x204ea00 00:17:59.749 [2024-11-26 16:23:25.187995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.749 [2024-11-26 16:23:25.188001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.749 [2024-11-26 16:23:25.188005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.749 [2024-11-26 16:23:25.188009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2088140) on tqpair=0x204ea00 00:17:59.749 Firmware Activation Without Reset: N/A 00:17:59.749 Multiple Update Detection Support: N/A 00:17:59.749 Firmware Update Granularity: No Information Provided 00:17:59.749 Per-Namespace SMART Log: No 00:17:59.749 Asymmetric Namespace Access Log Page: Not Supported 00:17:59.749 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:59.749 Command Effects Log Page: Supported 00:17:59.749 Get Log Page Extended 
Data: Supported 00:17:59.749 Telemetry Log Pages: Not Supported 00:17:59.749 Persistent Event Log Pages: Not Supported 00:17:59.749 Supported Log Pages Log Page: May Support 00:17:59.749 Commands Supported & Effects Log Page: Not Supported 00:17:59.749 Feature Identifiers & Effects Log Page:May Support 00:17:59.749 NVMe-MI Commands & Effects Log Page: May Support 00:17:59.749 Data Area 4 for Telemetry Log: Not Supported 00:17:59.749 Error Log Page Entries Supported: 128 00:17:59.749 Keep Alive: Supported 00:17:59.749 Keep Alive Granularity: 10000 ms 00:17:59.749 00:17:59.749 NVM Command Set Attributes 00:17:59.749 ========================== 00:17:59.749 Submission Queue Entry Size 00:17:59.749 Max: 64 00:17:59.749 Min: 64 00:17:59.749 Completion Queue Entry Size 00:17:59.749 Max: 16 00:17:59.749 Min: 16 00:17:59.749 Number of Namespaces: 32 00:17:59.749 Compare Command: Supported 00:17:59.749 Write Uncorrectable Command: Not Supported 00:17:59.749 Dataset Management Command: Supported 00:17:59.749 Write Zeroes Command: Supported 00:17:59.749 Set Features Save Field: Not Supported 00:17:59.749 Reservations: Supported 00:17:59.749 Timestamp: Not Supported 00:17:59.749 Copy: Supported 00:17:59.749 Volatile Write Cache: Present 00:17:59.749 Atomic Write Unit (Normal): 1 00:17:59.749 Atomic Write Unit (PFail): 1 00:17:59.749 Atomic Compare & Write Unit: 1 00:17:59.749 Fused Compare & Write: Supported 00:17:59.749 Scatter-Gather List 00:17:59.749 SGL Command Set: Supported 00:17:59.749 SGL Keyed: Supported 00:17:59.749 SGL Bit Bucket Descriptor: Not Supported 00:17:59.749 SGL Metadata Pointer: Not Supported 00:17:59.749 Oversized SGL: Not Supported 00:17:59.749 SGL Metadata Address: Not Supported 00:17:59.749 SGL Offset: Supported 00:17:59.749 Transport SGL Data Block: Not Supported 00:17:59.749 Replay Protected Memory Block: Not Supported 00:17:59.749 00:17:59.749 Firmware Slot Information 00:17:59.749 ========================= 00:17:59.749 Active slot: 1 00:17:59.749 Slot 1 Firmware Revision: 25.01 00:17:59.749 00:17:59.749 00:17:59.749 Commands Supported and Effects 00:17:59.749 ============================== 00:17:59.749 Admin Commands 00:17:59.749 -------------- 00:17:59.749 Get Log Page (02h): Supported 00:17:59.749 Identify (06h): Supported 00:17:59.749 Abort (08h): Supported 00:17:59.749 Set Features (09h): Supported 00:17:59.749 Get Features (0Ah): Supported 00:17:59.749 Asynchronous Event Request (0Ch): Supported 00:17:59.749 Keep Alive (18h): Supported 00:17:59.749 I/O Commands 00:17:59.749 ------------ 00:17:59.749 Flush (00h): Supported LBA-Change 00:17:59.749 Write (01h): Supported LBA-Change 00:17:59.749 Read (02h): Supported 00:17:59.749 Compare (05h): Supported 00:17:59.749 Write Zeroes (08h): Supported LBA-Change 00:17:59.749 Dataset Management (09h): Supported LBA-Change 00:17:59.749 Copy (19h): Supported LBA-Change 00:17:59.749 00:17:59.749 Error Log 00:17:59.749 ========= 00:17:59.749 00:17:59.749 Arbitration 00:17:59.749 =========== 00:17:59.749 Arbitration Burst: 1 00:17:59.749 00:17:59.749 Power Management 00:17:59.749 ================ 00:17:59.749 Number of Power States: 1 00:17:59.749 Current Power State: Power State #0 00:17:59.749 Power State #0: 00:17:59.749 Max Power: 0.00 W 00:17:59.749 Non-Operational State: Operational 00:17:59.749 Entry Latency: Not Reported 00:17:59.749 Exit Latency: Not Reported 00:17:59.749 Relative Read Throughput: 0 00:17:59.749 Relative Read Latency: 0 00:17:59.749 Relative Write Throughput: 0 00:17:59.749 Relative Write Latency: 0 
00:17:59.749 Idle Power: Not Reported 00:17:59.749 Active Power: Not Reported 00:17:59.749 Non-Operational Permissive Mode: Not Supported 00:17:59.749 00:17:59.749 Health Information 00:17:59.749 ================== 00:17:59.749 Critical Warnings: 00:17:59.749 Available Spare Space: OK 00:17:59.749 Temperature: OK 00:17:59.749 Device Reliability: OK 00:17:59.749 Read Only: No 00:17:59.749 Volatile Memory Backup: OK 00:17:59.749 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:59.749 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:59.749 Available Spare: 0% 00:17:59.749 Available Spare Threshold: 0% 00:17:59.749 Life Percentage Used:[2024-11-26 16:23:25.188109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.749 [2024-11-26 16:23:25.188117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x204ea00) 00:17:59.749 [2024-11-26 16:23:25.188125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.749 [2024-11-26 16:23:25.188147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2088140, cid 7, qid 0 00:17:59.749 [2024-11-26 16:23:25.188197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.749 [2024-11-26 16:23:25.188205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.749 [2024-11-26 16:23:25.188208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.749 [2024-11-26 16:23:25.188213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2088140) on tqpair=0x204ea00 00:17:59.749 [2024-11-26 16:23:25.188249] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:59.749 [2024-11-26 16:23:25.188260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20876c0) on tqpair=0x204ea00 00:17:59.749 [2024-11-26 16:23:25.188267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.750 [2024-11-26 16:23:25.188273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087840) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.188278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.750 [2024-11-26 16:23:25.188284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20879c0) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.188289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.750 [2024-11-26 16:23:25.188294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.188299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.750 [2024-11-26 16:23:25.188309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.188313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.188318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.750 [2024-11-26 16:23:25.188326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:59.750 [2024-11-26 16:23:25.192353] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.750 [2024-11-26 16:23:25.192382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.750 [2024-11-26 16:23:25.192390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.750 [2024-11-26 16:23:25.192394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.192410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.750 [2024-11-26 16:23:25.192428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.750 [2024-11-26 16:23:25.192456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.750 [2024-11-26 16:23:25.192523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.750 [2024-11-26 16:23:25.192530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.750 [2024-11-26 16:23:25.192534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.192544] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:59.750 [2024-11-26 16:23:25.192550] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:59.750 [2024-11-26 16:23:25.192560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.750 [2024-11-26 16:23:25.192577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.750 [2024-11-26 16:23:25.192595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.750 [2024-11-26 16:23:25.192642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.750 [2024-11-26 16:23:25.192649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.750 [2024-11-26 16:23:25.192653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.192668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192677] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.750 [2024-11-26 16:23:25.192684] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.750 [2024-11-26 16:23:25.192727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.750 [2024-11-26 16:23:25.192779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.750 [2024-11-26 16:23:25.192786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.750 [2024-11-26 16:23:25.192790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.192806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.750 [2024-11-26 16:23:25.192823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.750 [2024-11-26 16:23:25.192841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.750 [2024-11-26 16:23:25.192889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.750 [2024-11-26 16:23:25.192896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.750 [2024-11-26 16:23:25.192900] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.192915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.192924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.750 [2024-11-26 16:23:25.192932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.750 [2024-11-26 16:23:25.192949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.750 [2024-11-26 16:23:25.192991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.750 [2024-11-26 16:23:25.192998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.750 [2024-11-26 16:23:25.193001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.193032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.750 [2024-11-26 16:23:25.193048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.750 [2024-11-26 16:23:25.193065] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.750 [2024-11-26 16:23:25.193121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.750 [2024-11-26 16:23:25.193139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.750 [2024-11-26 16:23:25.193143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.193158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.750 [2024-11-26 16:23:25.193175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.750 [2024-11-26 16:23:25.193192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.750 [2024-11-26 16:23:25.193237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.750 [2024-11-26 16:23:25.193244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.750 [2024-11-26 16:23:25.193248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.193263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.750 [2024-11-26 16:23:25.193280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.750 [2024-11-26 16:23:25.193298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.750 [2024-11-26 16:23:25.193340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.750 [2024-11-26 16:23:25.193347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.750 [2024-11-26 16:23:25.193351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.750 [2024-11-26 16:23:25.193356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.750 [2024-11-26 16:23:25.193366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.193383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.193416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.193460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 
16:23:25.193467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.193471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.193502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.193518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.193536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.193585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.193602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.193607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.193622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.193639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.193658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.193702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.193713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.193717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.193733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.193749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.193767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.193811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.193818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.193822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 
[2024-11-26 16:23:25.193826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.193837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.193853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.193871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.193914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.193921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.193925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.193939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.193948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.193956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.193973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.194017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.194028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.194033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.194048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.194065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.194083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.194126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.194133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.194137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.194151] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.194168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.194185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.194226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.194233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.194237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.194251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.194267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.194285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.194331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.194354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.194359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.194375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.194392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.194413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.194457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.194464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.194468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.194483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194492] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.194499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.194517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.194557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.194564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.194568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.194583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.751 [2024-11-26 16:23:25.194599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.751 [2024-11-26 16:23:25.194617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.751 [2024-11-26 16:23:25.194663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.751 [2024-11-26 16:23:25.194670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.751 [2024-11-26 16:23:25.194674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.751 [2024-11-26 16:23:25.194678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.751 [2024-11-26 16:23:25.194688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.194693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.194697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.194704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.194722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.194765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.194773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.194777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.194781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.194792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.194797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.194801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.194808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.194826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.194867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.194874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.194878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.194882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.194893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.194898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.194902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.194909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.194926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.194969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.194976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.194980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.194985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.194995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.195012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.195029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.195069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.195081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.195085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.195101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.195117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.195135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 
16:23:25.195182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.195189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.195193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.195207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.195224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.195241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.195287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.195294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.195298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.195312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.195329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.195377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.195419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.195426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.195430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.195446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.195463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.195483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.195528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.195535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 
[2024-11-26 16:23:25.195539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.195554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.195571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.195589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.195631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.195638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.195642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.195657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.195674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.195692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.195749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.195756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.195759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.195774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.195790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.195808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.195851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.195858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.195862] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.752 [2024-11-26 16:23:25.195877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.752 [2024-11-26 16:23:25.195893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-11-26 16:23:25.195910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.752 [2024-11-26 16:23:25.195951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.752 [2024-11-26 16:23:25.195958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.752 [2024-11-26 16:23:25.195961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.752 [2024-11-26 16:23:25.195966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.753 [2024-11-26 16:23:25.195977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.195981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.195985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.753 [2024-11-26 16:23:25.195993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-11-26 16:23:25.196010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.753 [2024-11-26 16:23:25.196059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.753 [2024-11-26 16:23:25.196066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.753 [2024-11-26 16:23:25.196070] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.196074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.753 [2024-11-26 16:23:25.196085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.196090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.196094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.753 [2024-11-26 16:23:25.196101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-11-26 16:23:25.196118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.753 [2024-11-26 16:23:25.196162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.753 [2024-11-26 16:23:25.196173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.753 [2024-11-26 16:23:25.196178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.196182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.753 [2024-11-26 16:23:25.196194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.196199] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.196203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.753 [2024-11-26 16:23:25.196210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-11-26 16:23:25.196228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.753 [2024-11-26 16:23:25.196272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.753 [2024-11-26 16:23:25.196279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.753 [2024-11-26 16:23:25.196283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.196287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.753 [2024-11-26 16:23:25.196298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.196303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.196307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.753 [2024-11-26 16:23:25.196315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-11-26 16:23:25.196332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.753 [2024-11-26 16:23:25.200362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.753 [2024-11-26 16:23:25.200380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.753 [2024-11-26 16:23:25.200401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.200406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.753 [2024-11-26 16:23:25.200419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.200424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.200428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x204ea00) 00:17:59.753 [2024-11-26 16:23:25.200437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-11-26 16:23:25.200461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2087b40, cid 3, qid 0 00:17:59.753 [2024-11-26 16:23:25.200522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:59.753 [2024-11-26 16:23:25.200529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:59.753 [2024-11-26 16:23:25.200533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:59.753 [2024-11-26 16:23:25.200537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2087b40) on tqpair=0x204ea00 00:17:59.753 [2024-11-26 16:23:25.200545] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:17:59.753 0% 00:17:59.753 Data Units Read: 0 00:17:59.753 Data Units Written: 0 00:17:59.753 Host Read Commands: 0 00:17:59.753 Host Write Commands: 0 00:17:59.753 Controller Busy Time: 0 minutes 
00:17:59.753 Power Cycles: 0 00:17:59.753 Power On Hours: 0 hours 00:17:59.753 Unsafe Shutdowns: 0 00:17:59.753 Unrecoverable Media Errors: 0 00:17:59.753 Lifetime Error Log Entries: 0 00:17:59.753 Warning Temperature Time: 0 minutes 00:17:59.753 Critical Temperature Time: 0 minutes 00:17:59.753 00:17:59.753 Number of Queues 00:17:59.753 ================ 00:17:59.753 Number of I/O Submission Queues: 127 00:17:59.753 Number of I/O Completion Queues: 127 00:17:59.753 00:17:59.753 Active Namespaces 00:17:59.753 ================= 00:17:59.753 Namespace ID:1 00:17:59.753 Error Recovery Timeout: Unlimited 00:17:59.753 Command Set Identifier: NVM (00h) 00:17:59.753 Deallocate: Supported 00:17:59.753 Deallocated/Unwritten Error: Not Supported 00:17:59.753 Deallocated Read Value: Unknown 00:17:59.753 Deallocate in Write Zeroes: Not Supported 00:17:59.753 Deallocated Guard Field: 0xFFFF 00:17:59.753 Flush: Supported 00:17:59.753 Reservation: Supported 00:17:59.753 Namespace Sharing Capabilities: Multiple Controllers 00:17:59.753 Size (in LBAs): 131072 (0GiB) 00:17:59.753 Capacity (in LBAs): 131072 (0GiB) 00:17:59.753 Utilization (in LBAs): 131072 (0GiB) 00:17:59.753 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:59.753 EUI64: ABCDEF0123456789 00:17:59.753 UUID: 773e2767-71e6-4eb8-800c-5e75e5e4f14f 00:17:59.753 Thin Provisioning: Not Supported 00:17:59.753 Per-NS Atomic Units: Yes 00:17:59.753 Atomic Boundary Size (Normal): 0 00:17:59.753 Atomic Boundary Size (PFail): 0 00:17:59.753 Atomic Boundary Offset: 0 00:17:59.753 Maximum Single Source Range Length: 65535 00:17:59.753 Maximum Copy Length: 65535 00:17:59.753 Maximum Source Range Count: 1 00:17:59.753 NGUID/EUI64 Never Reused: No 00:17:59.753 Namespace Write Protected: No 00:17:59.753 Number of LBA Formats: 1 00:17:59.753 Current LBA Format: LBA Format #00 00:17:59.753 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:59.753 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.753 rmmod nvme_tcp 00:17:59.753 rmmod nvme_fabrics 00:17:59.753 rmmod nvme_keyring 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:59.753 16:23:25 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 88147 ']' 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 88147 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 88147 ']' 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 88147 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88147 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.753 killing process with pid 88147 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88147' 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 88147 00:17:59.753 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 88147 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.013 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:18:00.271 00:18:00.271 real 0m2.103s 00:18:00.271 user 0m4.235s 00:18:00.271 sys 0m0.673s 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 ************************************ 00:18:00.271 END TEST nvmf_identify 00:18:00.271 ************************************ 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 ************************************ 00:18:00.271 START TEST nvmf_perf 00:18:00.271 ************************************ 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:00.271 * Looking for test storage... 
00:18:00.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.271 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.530 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.531 --rc genhtml_branch_coverage=1 00:18:00.531 --rc genhtml_function_coverage=1 00:18:00.531 --rc genhtml_legend=1 00:18:00.531 --rc geninfo_all_blocks=1 00:18:00.531 --rc geninfo_unexecuted_blocks=1 00:18:00.531 00:18:00.531 ' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.531 --rc genhtml_branch_coverage=1 00:18:00.531 --rc genhtml_function_coverage=1 00:18:00.531 --rc genhtml_legend=1 00:18:00.531 --rc geninfo_all_blocks=1 00:18:00.531 --rc geninfo_unexecuted_blocks=1 00:18:00.531 00:18:00.531 ' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.531 --rc genhtml_branch_coverage=1 00:18:00.531 --rc genhtml_function_coverage=1 00:18:00.531 --rc genhtml_legend=1 00:18:00.531 --rc geninfo_all_blocks=1 00:18:00.531 --rc geninfo_unexecuted_blocks=1 00:18:00.531 00:18:00.531 ' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.531 --rc genhtml_branch_coverage=1 00:18:00.531 --rc genhtml_function_coverage=1 00:18:00.531 --rc genhtml_legend=1 00:18:00.531 --rc geninfo_all_blocks=1 00:18:00.531 --rc geninfo_unexecuted_blocks=1 00:18:00.531 00:18:00.531 ' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.531 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:18:00.531 16:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.531 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.532 Cannot find device "nvmf_init_br" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.532 Cannot find device "nvmf_init_br2" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:00.532 Cannot find device "nvmf_tgt_br" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.532 Cannot find device "nvmf_tgt_br2" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:00.532 Cannot find device "nvmf_init_br" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:00.532 Cannot find device "nvmf_init_br2" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:00.532 Cannot find device "nvmf_tgt_br" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:00.532 Cannot find device "nvmf_tgt_br2" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:00.532 Cannot find device "nvmf_br" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:00.532 Cannot find device "nvmf_init_if" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:00.532 Cannot find device "nvmf_init_if2" 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:00.532 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.791 16:23:26 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:00.791 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:00.791 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:18:00.791 00:18:00.791 --- 10.0.0.3 ping statistics --- 00:18:00.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.791 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:00.791 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:00.791 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:18:00.791 00:18:00.791 --- 10.0.0.4 ping statistics --- 00:18:00.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.791 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:00.791 00:18:00.791 --- 10.0.0.1 ping statistics --- 00:18:00.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.791 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:00.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:18:00.791 00:18:00.791 --- 10.0.0.2 ping statistics --- 00:18:00.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.791 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=88397 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 88397 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 88397 ']' 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:00.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
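(Annotation, not part of the trace.) The nvmf_veth_init sequence above is easier to follow condensed out of xtrace form. Below is a rough sketch of the same topology: interface names, addresses, port and namespace come from the log; the loop framing and comments are added for brevity and are not the script itself. The initiator-side interfaces stay in the default namespace while the target-side ends move into nvmf_tgt_ns_spdk, and all the *_br peers hang off one bridge.

# Condensed reconstruction of the veth/namespace/bridge layout built above (illustrative sketch).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and enslave the bridge-side peers to nvmf_br so both sides can reach each other.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port toward the initiator interfaces (the script also tags each rule with an
# SPDK_NVMF comment so teardown can strip them), then verify reachability exactly as the log does.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4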
00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.791 16:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:01.051 [2024-11-26 16:23:26.481638] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:18:01.051 [2024-11-26 16:23:26.481732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.051 [2024-11-26 16:23:26.629796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:01.051 [2024-11-26 16:23:26.649643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.051 [2024-11-26 16:23:26.649685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.051 [2024-11-26 16:23:26.649694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.051 [2024-11-26 16:23:26.649701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.051 [2024-11-26 16:23:26.649707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.051 [2024-11-26 16:23:26.650295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.051 [2024-11-26 16:23:26.651008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.051 [2024-11-26 16:23:26.651216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.051 [2024-11-26 16:23:26.651223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.051 [2024-11-26 16:23:26.681852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.987 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.987 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:18:01.987 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.987 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.987 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:01.987 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.987 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:01.988 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:02.555 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:02.555 16:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:02.815 16:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:02.815 16:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:03.073 16:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:03.073 16:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:18:03.073 16:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:03.073 16:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:03.073 16:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:03.332 [2024-11-26 16:23:28.797804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.332 16:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.591 16:23:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:03.591 16:23:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.849 16:23:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:03.849 16:23:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:04.108 16:23:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:04.368 [2024-11-26 16:23:29.839157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:04.368 16:23:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:04.626 16:23:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:04.627 16:23:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:04.627 16:23:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:04.627 16:23:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:06.028 Initializing NVMe Controllers 00:18:06.028 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:06.028 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:06.028 Initialization complete. Launching workers. 00:18:06.028 ======================================================== 00:18:06.028 Latency(us) 00:18:06.028 Device Information : IOPS MiB/s Average min max 00:18:06.028 PCIE (0000:00:10.0) NSID 1 from core 0: 22438.33 87.65 1426.29 417.40 7755.54 00:18:06.028 ======================================================== 00:18:06.028 Total : 22438.33 87.65 1426.29 417.40 7755.54 00:18:06.028 00:18:06.028 16:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:06.965 Initializing NVMe Controllers 00:18:06.965 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:06.965 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:06.965 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:06.965 Initialization complete. Launching workers. 
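(Annotation, not part of the trace.) The target provisioning just above is all driven through rpc.py against the nvmf_tgt instance started inside the namespace. Condensed, the sequence perf.sh walks through looks roughly like the sketch below; paths, NQN, bdev names, address and port are taken from the trace, while the $rpc shorthand and ordering are illustrative.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Nvme0n1 comes from gen_nvme.sh piped into "rpc.py load_subsystem_config", which attaches the
# local 0000:00:10.0 controller; a 64 MiB / 512 B Malloc bdev is added alongside it.
$rpc bdev_malloc_create 64 512        # -> Malloc0

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# The perf runs then point spdk_nvme_perf either at the local PCIe device or at the TCP listener,
# e.g. the two invocations that produce the result tables in this part of the log:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:PCIe traddr:0000:00:10.0'
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'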
00:18:06.965 ======================================================== 00:18:06.965 Latency(us) 00:18:06.965 Device Information : IOPS MiB/s Average min max 00:18:06.965 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3978.99 15.54 250.97 96.56 4235.42 00:18:06.965 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.50 0.49 8031.27 5005.04 12014.94 00:18:06.965 ======================================================== 00:18:06.965 Total : 4104.49 16.03 488.85 96.56 12014.94 00:18:06.965 00:18:07.224 16:23:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:08.603 Initializing NVMe Controllers 00:18:08.603 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.603 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:08.603 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:08.603 Initialization complete. Launching workers. 00:18:08.603 ======================================================== 00:18:08.603 Latency(us) 00:18:08.603 Device Information : IOPS MiB/s Average min max 00:18:08.603 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8603.90 33.61 3719.55 688.40 7647.14 00:18:08.603 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4011.02 15.67 8001.21 5793.20 11490.45 00:18:08.603 ======================================================== 00:18:08.603 Total : 12614.92 49.28 5080.94 688.40 11490.45 00:18:08.603 00:18:08.603 16:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:08.603 16:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:11.141 Initializing NVMe Controllers 00:18:11.141 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:11.141 Controller IO queue size 128, less than required. 00:18:11.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.141 Controller IO queue size 128, less than required. 00:18:11.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.141 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:11.141 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:11.141 Initialization complete. Launching workers. 
00:18:11.141 ======================================================== 00:18:11.141 Latency(us) 00:18:11.141 Device Information : IOPS MiB/s Average min max 00:18:11.141 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2056.48 514.12 62676.59 33166.04 86327.42 00:18:11.141 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 685.49 171.37 192301.85 52466.38 287706.81 00:18:11.141 ======================================================== 00:18:11.141 Total : 2741.97 685.49 95082.91 33166.04 287706.81 00:18:11.141 00:18:11.141 16:23:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:11.401 Initializing NVMe Controllers 00:18:11.401 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:11.401 Controller IO queue size 128, less than required. 00:18:11.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.401 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:11.401 Controller IO queue size 128, less than required. 00:18:11.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.401 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:11.401 WARNING: Some requested NVMe devices were skipped 00:18:11.401 No valid NVMe controllers or AIO or URING devices found 00:18:11.401 16:23:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:13.935 Initializing NVMe Controllers 00:18:13.935 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:13.935 Controller IO queue size 128, less than required. 00:18:13.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:13.935 Controller IO queue size 128, less than required. 00:18:13.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:13.935 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:13.935 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:13.935 Initialization complete. Launching workers. 
00:18:13.935 00:18:13.935 ==================== 00:18:13.935 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:13.935 TCP transport: 00:18:13.935 polls: 9690 00:18:13.935 idle_polls: 6246 00:18:13.935 sock_completions: 3444 00:18:13.935 nvme_completions: 6481 00:18:13.935 submitted_requests: 9866 00:18:13.935 queued_requests: 1 00:18:13.935 00:18:13.935 ==================== 00:18:13.935 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:13.935 TCP transport: 00:18:13.935 polls: 9985 00:18:13.935 idle_polls: 5968 00:18:13.935 sock_completions: 4017 00:18:13.935 nvme_completions: 6637 00:18:13.935 submitted_requests: 10016 00:18:13.935 queued_requests: 1 00:18:13.935 ======================================================== 00:18:13.935 Latency(us) 00:18:13.935 Device Information : IOPS MiB/s Average min max 00:18:13.935 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1619.66 404.91 80416.54 37024.79 139283.61 00:18:13.935 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1658.65 414.66 78030.98 32076.20 126004.17 00:18:13.935 ======================================================== 00:18:13.935 Total : 3278.30 819.58 79209.57 32076.20 139283.61 00:18:13.935 00:18:13.935 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:13.935 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.194 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:14.194 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:14.194 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:14.452 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d1154b3f-20f4-4867-b64e-b3060264dacd 00:18:14.452 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d1154b3f-20f4-4867-b64e-b3060264dacd 00:18:14.452 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=d1154b3f-20f4-4867-b64e-b3060264dacd 00:18:14.452 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:14.452 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:14.452 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:14.452 16:23:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:14.711 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:14.711 { 00:18:14.711 "uuid": "d1154b3f-20f4-4867-b64e-b3060264dacd", 00:18:14.711 "name": "lvs_0", 00:18:14.711 "base_bdev": "Nvme0n1", 00:18:14.711 "total_data_clusters": 1278, 00:18:14.711 "free_clusters": 1278, 00:18:14.711 "block_size": 4096, 00:18:14.711 "cluster_size": 4194304 00:18:14.711 } 00:18:14.711 ]' 00:18:14.711 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d1154b3f-20f4-4867-b64e-b3060264dacd") .free_clusters' 00:18:14.711 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:18:14.711 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="d1154b3f-20f4-4867-b64e-b3060264dacd") .cluster_size' 00:18:14.711 5112 00:18:14.711 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:14.711 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:18:14.711 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:18:14.711 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:14.711 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d1154b3f-20f4-4867-b64e-b3060264dacd lbd_0 5112 00:18:14.970 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b53a3bd4-92d0-4502-8c7d-571ab0e5a47d 00:18:14.970 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore b53a3bd4-92d0-4502-8c7d-571ab0e5a47d lvs_n_0 00:18:15.538 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2a6a7daf-11d1-4aa7-a3b8-128c1adeea6a 00:18:15.538 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2a6a7daf-11d1-4aa7-a3b8-128c1adeea6a 00:18:15.538 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=2a6a7daf-11d1-4aa7-a3b8-128c1adeea6a 00:18:15.538 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:15.538 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:15.538 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:15.538 16:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:15.797 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:15.797 { 00:18:15.797 "uuid": "d1154b3f-20f4-4867-b64e-b3060264dacd", 00:18:15.797 "name": "lvs_0", 00:18:15.797 "base_bdev": "Nvme0n1", 00:18:15.797 "total_data_clusters": 1278, 00:18:15.797 "free_clusters": 0, 00:18:15.797 "block_size": 4096, 00:18:15.797 "cluster_size": 4194304 00:18:15.797 }, 00:18:15.797 { 00:18:15.797 "uuid": "2a6a7daf-11d1-4aa7-a3b8-128c1adeea6a", 00:18:15.797 "name": "lvs_n_0", 00:18:15.797 "base_bdev": "b53a3bd4-92d0-4502-8c7d-571ab0e5a47d", 00:18:15.798 "total_data_clusters": 1276, 00:18:15.798 "free_clusters": 1276, 00:18:15.798 "block_size": 4096, 00:18:15.798 "cluster_size": 4194304 00:18:15.798 } 00:18:15.798 ]' 00:18:15.798 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2a6a7daf-11d1-4aa7-a3b8-128c1adeea6a") .free_clusters' 00:18:15.798 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:18:15.798 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2a6a7daf-11d1-4aa7-a3b8-128c1adeea6a") .cluster_size' 00:18:15.798 5104 00:18:15.798 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:15.798 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:18:15.798 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:18:15.798 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:15.798 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a6a7daf-11d1-4aa7-a3b8-128c1adeea6a lbd_nest_0 5104 00:18:16.057 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=6fe6c5ff-cb75-47e6-93fa-3bbefe32aefb 00:18:16.057 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:16.316 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:16.316 16:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6fe6c5ff-cb75-47e6-93fa-3bbefe32aefb 00:18:16.576 16:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:16.836 16:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:16.836 16:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:16.836 16:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:16.836 16:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:16.836 16:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:17.095 Initializing NVMe Controllers 00:18:17.095 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.095 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:17.095 WARNING: Some requested NVMe devices were skipped 00:18:17.095 No valid NVMe controllers or AIO or URING devices found 00:18:17.095 16:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:17.095 16:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:29.336 Initializing NVMe Controllers 00:18:29.336 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:29.336 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:29.336 Initialization complete. Launching workers. 
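(Annotation, not part of the trace.) After deleting the first subsystem, perf.sh rebuilds the namespace as an lvol nested inside another lvol, so the remaining fabric runs exercise a logical-volume stack instead of the raw drive. The free sizes fall out of the cluster counts reported above: 1278 free clusters x 4 MiB = 5112 MiB for lvs_0, and the nested store reports 1276 data clusters (5104 MiB), the remainder apparently going to lvstore metadata. With the UUIDs from the trace (the $rpc shorthand and comments are illustrative), the shape is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# lvstore directly on the NVMe bdev, then one lvol that consumes all 5112 MiB of it.
$rpc bdev_lvol_create_lvstore Nvme0n1 lvs_0
$rpc bdev_lvol_create -u d1154b3f-20f4-4867-b64e-b3060264dacd lbd_0 5112

# Second lvstore layered on that lvol (addressed by its UUID), and a nested lvol filling it.
$rpc bdev_lvol_create_lvstore b53a3bd4-92d0-4502-8c7d-571ab0e5a47d lvs_n_0
$rpc bdev_lvol_create -u 2a6a7daf-11d1-4aa7-a3b8-128c1adeea6a lbd_nest_0 5104

# The nested lvol becomes the only namespace behind the subsystem the remaining perf runs target.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6fe6c5ff-cb75-47e6-93fa-3bbefe32aefb
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420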
00:18:29.336 ======================================================== 00:18:29.336 Latency(us) 00:18:29.336 Device Information : IOPS MiB/s Average min max 00:18:29.336 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 968.59 121.07 1032.01 327.64 8418.78 00:18:29.336 ======================================================== 00:18:29.336 Total : 968.59 121.07 1032.01 327.64 8418.78 00:18:29.336 00:18:29.336 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:29.336 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:29.336 16:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:29.336 Initializing NVMe Controllers 00:18:29.336 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:29.336 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:29.336 WARNING: Some requested NVMe devices were skipped 00:18:29.336 No valid NVMe controllers or AIO or URING devices found 00:18:29.336 16:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:29.336 16:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:39.316 Initializing NVMe Controllers 00:18:39.316 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:39.316 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:39.316 Initialization complete. Launching workers. 
00:18:39.316 ======================================================== 00:18:39.316 Latency(us) 00:18:39.316 Device Information : IOPS MiB/s Average min max 00:18:39.316 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1354.05 169.26 23633.34 5247.94 58513.04 00:18:39.316 ======================================================== 00:18:39.316 Total : 1354.05 169.26 23633.34 5247.94 58513.04 00:18:39.316 00:18:39.316 16:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:39.316 16:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:39.316 16:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:39.316 Initializing NVMe Controllers 00:18:39.316 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:39.316 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:39.316 WARNING: Some requested NVMe devices were skipped 00:18:39.316 No valid NVMe controllers or AIO or URING devices found 00:18:39.316 16:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:39.316 16:24:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:49.293 Initializing NVMe Controllers 00:18:49.293 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:49.293 Controller IO queue size 128, less than required. 00:18:49.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:49.293 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:49.293 Initialization complete. Launching workers. 
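(Annotation, not part of the trace.) The last batch of runs above iterates a small queue-depth x I/O-size matrix against the nested-lvol subsystem. Only the 128 KiB cases produce latency tables: the nested lvol exposes 4096-byte blocks, so the 512-byte cases log "invalid ns size 5351931904 / block size 4096 for I/O size 512" and drop the namespace before any I/O is issued (5351931904 bytes is the 5104 MiB lvol). The same alignment rule explains the earlier skipped -o 36964 run, since 36964 % 512 = 36964 % 4096 = 100. A sketch of the loop, with paths and flags from the trace and the comments added:

qd_depth=(1 32 128)
io_size=(512 131072)
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
        # Only -o 131072 runs report numbers here: 131072 = 32 * 4096, while 512 is smaller
        # than the namespace block size, so perf removes the namespace and finds nothing to test.
        "$perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    done
done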
00:18:49.293 ======================================================== 00:18:49.293 Latency(us) 00:18:49.293 Device Information : IOPS MiB/s Average min max 00:18:49.293 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4077.10 509.64 31410.67 10034.21 67085.47 00:18:49.294 ======================================================== 00:18:49.294 Total : 4077.10 509.64 31410.67 10034.21 67085.47 00:18:49.294 00:18:49.294 16:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.294 16:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6fe6c5ff-cb75-47e6-93fa-3bbefe32aefb 00:18:49.294 16:24:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:49.552 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b53a3bd4-92d0-4502-8c7d-571ab0e5a47d 00:18:49.810 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:50.068 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:50.068 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:50.068 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:50.068 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:50.326 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.326 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:50.326 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.326 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.326 rmmod nvme_tcp 00:18:50.326 rmmod nvme_fabrics 00:18:50.326 rmmod nvme_keyring 00:18:50.326 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.326 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:50.326 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:50.326 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 88397 ']' 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 88397 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 88397 ']' 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 88397 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88397 00:18:50.327 killing process with pid 88397 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88397' 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 88397 00:18:50.327 16:24:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 88397 00:18:51.702 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:51.702 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:51.702 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:51.702 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:51.702 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:18:51.702 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:51.703 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:51.961 00:18:51.961 real 0m51.638s 00:18:51.961 user 3m15.367s 00:18:51.961 sys 0m11.840s 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.961 ************************************ 00:18:51.961 END TEST nvmf_perf 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:51.961 ************************************ 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.961 ************************************ 00:18:51.961 START TEST nvmf_fio_host 00:18:51.961 ************************************ 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:51.961 * Looking for test storage... 00:18:51.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:51.961 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.220 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:52.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.221 --rc genhtml_branch_coverage=1 00:18:52.221 --rc genhtml_function_coverage=1 00:18:52.221 --rc genhtml_legend=1 00:18:52.221 --rc geninfo_all_blocks=1 00:18:52.221 --rc geninfo_unexecuted_blocks=1 00:18:52.221 00:18:52.221 ' 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:52.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.221 --rc genhtml_branch_coverage=1 00:18:52.221 --rc genhtml_function_coverage=1 00:18:52.221 --rc genhtml_legend=1 00:18:52.221 --rc geninfo_all_blocks=1 00:18:52.221 --rc geninfo_unexecuted_blocks=1 00:18:52.221 00:18:52.221 ' 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:52.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.221 --rc genhtml_branch_coverage=1 00:18:52.221 --rc genhtml_function_coverage=1 00:18:52.221 --rc genhtml_legend=1 00:18:52.221 --rc geninfo_all_blocks=1 00:18:52.221 --rc geninfo_unexecuted_blocks=1 00:18:52.221 00:18:52.221 ' 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:52.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.221 --rc genhtml_branch_coverage=1 00:18:52.221 --rc genhtml_function_coverage=1 00:18:52.221 --rc genhtml_legend=1 00:18:52.221 --rc geninfo_all_blocks=1 00:18:52.221 --rc geninfo_unexecuted_blocks=1 00:18:52.221 00:18:52.221 ' 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.221 16:24:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.221 16:24:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:52.221 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:52.222 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
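The nvmftestinit call above hands off to nvmf_veth_init, which rebuilds the virtual test network from scratch: two initiator-side veth pairs stay on the host, two target-side pairs are moved into the nvmf_tgt_ns_spdk namespace, and all four bridge-side peers are enslaved to nvmf_br. A condensed sketch of the equivalent setup, using the same interface names and 10.0.0.0/24 addressing that appear in the trace below (teardown of stale devices and the iptables ACCEPT rules are omitted):

  # namespace that will host the SPDK target
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring everything up and tie the peer ends together with a bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

After the bridge is up, a single ping to each address (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) confirms the path before any NVMe traffic is attempted.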
00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:52.222 Cannot find device "nvmf_init_br" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:52.222 Cannot find device "nvmf_init_br2" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:52.222 Cannot find device "nvmf_tgt_br" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:52.222 Cannot find device "nvmf_tgt_br2" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:52.222 Cannot find device "nvmf_init_br" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:52.222 Cannot find device "nvmf_init_br2" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:52.222 Cannot find device "nvmf_tgt_br" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:52.222 Cannot find device "nvmf_tgt_br2" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:52.222 Cannot find device "nvmf_br" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:52.222 Cannot find device "nvmf_init_if" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:52.222 Cannot find device "nvmf_init_if2" 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.222 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.482 16:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:52.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:52.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:18:52.482 00:18:52.482 --- 10.0.0.3 ping statistics --- 00:18:52.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.482 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:52.482 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:52.482 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:18:52.482 00:18:52.482 --- 10.0.0.4 ping statistics --- 00:18:52.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.482 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:52.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:52.482 00:18:52.482 --- 10.0.0.1 ping statistics --- 00:18:52.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.482 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:52.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:18:52.482 00:18:52.482 --- 10.0.0.2 ping statistics --- 00:18:52.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.482 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89276 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89276 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 89276 ']' 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.482 16:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.741 [2024-11-26 16:24:18.151640] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:18:52.741 [2024-11-26 16:24:18.151754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.741 [2024-11-26 16:24:18.305987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.741 [2024-11-26 16:24:18.330422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.741 [2024-11-26 16:24:18.330676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.741 [2024-11-26 16:24:18.330858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.741 [2024-11-26 16:24:18.330873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.741 [2024-11-26 16:24:18.330883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
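Once nvmf_tgt is listening on /var/tmp/spdk.sock, everything else is driven through scripts/rpc.py: a TCP transport, a RAM-backed Malloc bdev, and a subsystem that exports it on the target-side address, after which fio connects over NVMe/TCP via the SPDK fio plugin. A condensed restatement of the sequence that appears in the trace below ($RPC is only a shorthand introduced here for the full rpc.py path; the job file example_config.fio ships with SPDK under app/fio/nvme):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # transport, plus a 64 MB malloc bdev with 512-byte blocks to export
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc1

  # subsystem cnode1 gets Malloc1 as a namespace and listens on 10.0.0.3:4420
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

  # fio reaches the target in userspace through the preloaded SPDK engine,
  # addressing it by transport parameters instead of a block device node
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

The later passes in this run repeat the same pattern with different backing devices: an lvol carved from the local NVMe drive behind cnode2, and a nested lvol store on top of it behind cnode3.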
00:18:52.741 [2024-11-26 16:24:18.331908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.741 [2024-11-26 16:24:18.332029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.741 [2024-11-26 16:24:18.332558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.741 [2024-11-26 16:24:18.332566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.741 [2024-11-26 16:24:18.366139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:53.685 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.685 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:18:53.685 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:53.946 [2024-11-26 16:24:19.426474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.946 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:53.946 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.946 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.946 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:54.204 Malloc1 00:18:54.204 16:24:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:54.461 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:54.719 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:54.977 [2024-11-26 16:24:20.517012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:54.977 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:55.236 16:24:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:55.494 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:55.494 fio-3.35 00:18:55.494 Starting 1 thread 00:18:58.027 00:18:58.027 test: (groupid=0, jobs=1): err= 0: pid=89359: Tue Nov 26 16:24:23 2024 00:18:58.027 read: IOPS=9660, BW=37.7MiB/s (39.6MB/s)(75.7MiB/2006msec) 00:18:58.027 slat (nsec): min=1818, max=308935, avg=2217.92, stdev=3066.74 00:18:58.027 clat (usec): min=2158, max=12383, avg=6910.56, stdev=567.66 00:18:58.027 lat (usec): min=2184, max=12386, avg=6912.77, stdev=567.45 00:18:58.027 clat percentiles (usec): 00:18:58.027 | 1.00th=[ 5866], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6456], 00:18:58.027 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:18:58.027 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7635], 95.00th=[ 7898], 00:18:58.027 | 99.00th=[ 8455], 99.50th=[ 8848], 99.90th=[ 9634], 99.95th=[11731], 00:18:58.027 | 99.99th=[12387] 00:18:58.027 bw ( KiB/s): min=37744, max=39824, per=99.92%, avg=38610.75, stdev=872.86, samples=4 00:18:58.027 iops : min= 9436, max= 9956, avg=9652.50, stdev=218.27, samples=4 00:18:58.027 write: IOPS=9667, BW=37.8MiB/s (39.6MB/s)(75.8MiB/2006msec); 0 zone resets 00:18:58.027 slat (nsec): min=1879, max=104093, avg=2294.50, stdev=1344.71 00:18:58.027 clat (usec): min=2073, max=11917, avg=6292.85, stdev=515.81 00:18:58.027 lat (usec): min=2086, max=11919, avg=6295.14, stdev=515.68 00:18:58.027 
clat percentiles (usec): 00:18:58.027 | 1.00th=[ 5342], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5932], 00:18:58.027 | 30.00th=[ 6063], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:18:58.027 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7177], 00:18:58.027 | 99.00th=[ 7701], 99.50th=[ 7963], 99.90th=[ 9503], 99.95th=[10683], 00:18:58.027 | 99.99th=[11863] 00:18:58.027 bw ( KiB/s): min=37888, max=39424, per=99.92%, avg=38638.50, stdev=669.16, samples=4 00:18:58.027 iops : min= 9472, max= 9856, avg=9659.50, stdev=167.22, samples=4 00:18:58.027 lat (msec) : 4=0.18%, 10=99.75%, 20=0.07% 00:18:58.027 cpu : usr=72.92%, sys=20.45%, ctx=14, majf=0, minf=7 00:18:58.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:58.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.027 issued rwts: total=19378,19393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.027 00:18:58.027 Run status group 0 (all jobs): 00:18:58.027 READ: bw=37.7MiB/s (39.6MB/s), 37.7MiB/s-37.7MiB/s (39.6MB/s-39.6MB/s), io=75.7MiB (79.4MB), run=2006-2006msec 00:18:58.027 WRITE: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=75.8MiB (79.4MB), run=2006-2006msec 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:58.027 16:24:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:58.027 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:58.027 fio-3.35 00:18:58.027 Starting 1 thread 00:19:00.559 00:19:00.559 test: (groupid=0, jobs=1): err= 0: pid=89402: Tue Nov 26 16:24:25 2024 00:19:00.559 read: IOPS=8891, BW=139MiB/s (146MB/s)(279MiB/2006msec) 00:19:00.560 slat (usec): min=2, max=134, avg= 3.62, stdev= 2.33 00:19:00.560 clat (usec): min=1656, max=16602, avg=8022.25, stdev=2462.22 00:19:00.560 lat (usec): min=1659, max=16605, avg=8025.87, stdev=2462.28 00:19:00.560 clat percentiles (usec): 00:19:00.560 | 1.00th=[ 3785], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5800], 00:19:00.560 | 30.00th=[ 6456], 40.00th=[ 7111], 50.00th=[ 7832], 60.00th=[ 8455], 00:19:00.560 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[11207], 95.00th=[12518], 00:19:00.560 | 99.00th=[14746], 99.50th=[15533], 99.90th=[16188], 99.95th=[16450], 00:19:00.560 | 99.99th=[16581] 00:19:00.560 bw ( KiB/s): min=64000, max=77632, per=49.60%, avg=70568.00, stdev=5577.03, samples=4 00:19:00.560 iops : min= 4000, max= 4852, avg=4410.50, stdev=348.56, samples=4 00:19:00.560 write: IOPS=5119, BW=80.0MiB/s (83.9MB/s)(144MiB/1796msec); 0 zone resets 00:19:00.560 slat (usec): min=32, max=359, avg=37.24, stdev= 9.05 00:19:00.560 clat (usec): min=3619, max=19977, avg=11565.46, stdev=2065.91 00:19:00.560 lat (usec): min=3652, max=20009, avg=11602.71, stdev=2066.65 00:19:00.560 clat percentiles (usec): 00:19:00.560 | 1.00th=[ 7242], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:19:00.560 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:19:00.560 | 70.00th=[12387], 80.00th=[13304], 90.00th=[14484], 95.00th=[15270], 00:19:00.560 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19530], 99.95th=[19792], 00:19:00.560 | 99.99th=[20055] 00:19:00.560 bw ( KiB/s): min=66560, max=80256, per=89.80%, avg=73560.00, stdev=5602.19, samples=4 00:19:00.560 iops : min= 4160, max= 5016, avg=4597.50, stdev=350.14, samples=4 00:19:00.560 lat (msec) : 2=0.03%, 4=1.11%, 10=58.80%, 20=40.06% 00:19:00.560 cpu : usr=82.64%, sys=13.12%, ctx=6, majf=0, minf=3 00:19:00.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:00.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:00.560 issued rwts: total=17836,9195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:00.560 00:19:00.560 Run status group 0 (all jobs): 00:19:00.560 
READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=279MiB (292MB), run=2006-2006msec 00:19:00.560 WRITE: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=144MiB (151MB), run=1796-1796msec 00:19:00.560 16:24:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:00.560 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:19:01.126 Nvme0n1 00:19:01.126 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:19:01.126 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=cecf105e-f60b-4c9f-a454-481f692ef8ec 00:19:01.126 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb cecf105e-f60b-4c9f-a454-481f692ef8ec 00:19:01.126 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=cecf105e-f60b-4c9f-a454-481f692ef8ec 00:19:01.126 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:01.126 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:19:01.126 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:19:01.126 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:01.384 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:01.384 { 00:19:01.384 "uuid": "cecf105e-f60b-4c9f-a454-481f692ef8ec", 00:19:01.384 "name": "lvs_0", 00:19:01.384 "base_bdev": "Nvme0n1", 00:19:01.384 "total_data_clusters": 4, 00:19:01.384 "free_clusters": 4, 00:19:01.384 "block_size": 4096, 00:19:01.384 "cluster_size": 1073741824 00:19:01.384 } 00:19:01.384 ]' 00:19:01.384 16:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="cecf105e-f60b-4c9f-a454-481f692ef8ec") .free_clusters' 00:19:01.384 16:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:19:01.384 
16:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="cecf105e-f60b-4c9f-a454-481f692ef8ec") .cluster_size' 00:19:01.642 16:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:19:01.642 16:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:19:01.642 4096 00:19:01.642 16:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:19:01.642 16:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:19:01.642 9ca131c4-3a11-4a78-a28f-5c39170385f6 00:19:01.642 16:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:19:01.901 16:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:19:02.159 16:24:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:02.418 16:24:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:02.418 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:02.677 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:02.677 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:02.677 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:02.677 16:24:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:02.677 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:02.677 fio-3.35 00:19:02.677 Starting 1 thread 00:19:05.212 00:19:05.212 test: (groupid=0, jobs=1): err= 0: pid=89512: Tue Nov 26 16:24:30 2024 00:19:05.212 read: IOPS=6291, BW=24.6MiB/s (25.8MB/s)(49.4MiB/2009msec) 00:19:05.212 slat (usec): min=2, max=302, avg= 2.70, stdev= 3.61 00:19:05.212 clat (usec): min=3029, max=18766, avg=10623.09, stdev=851.65 00:19:05.212 lat (usec): min=3039, max=18768, avg=10625.79, stdev=851.38 00:19:05.212 clat percentiles (usec): 00:19:05.212 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:19:05.212 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:19:05.212 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11994], 00:19:05.212 | 99.00th=[12387], 99.50th=[12780], 99.90th=[16581], 99.95th=[17695], 00:19:05.212 | 99.99th=[18744] 00:19:05.212 bw ( KiB/s): min=24072, max=25648, per=99.99%, avg=25164.00, stdev=738.21, samples=4 00:19:05.212 iops : min= 6018, max= 6412, avg=6291.00, stdev=184.55, samples=4 00:19:05.212 write: IOPS=6282, BW=24.5MiB/s (25.7MB/s)(49.3MiB/2009msec); 0 zone resets 00:19:05.212 slat (usec): min=2, max=157, avg= 2.80, stdev= 2.23 00:19:05.212 clat (usec): min=2034, max=17827, avg=9637.61, stdev=821.02 00:19:05.212 lat (usec): min=2048, max=17830, avg=9640.41, stdev=820.92 00:19:05.212 clat percentiles (usec): 00:19:05.212 | 1.00th=[ 7963], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:19:05.212 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9765], 00:19:05.212 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 00:19:05.212 | 99.00th=[11338], 99.50th=[11600], 99.90th=[16450], 99.95th=[16712], 00:19:05.212 | 99.99th=[17695] 00:19:05.212 bw ( KiB/s): min=25040, max=25176, per=99.90%, avg=25106.00, stdev=68.00, samples=4 00:19:05.212 iops : min= 6260, max= 6294, avg=6276.50, stdev=17.00, samples=4 00:19:05.212 lat (msec) : 4=0.06%, 10=44.90%, 20=55.05% 00:19:05.212 cpu : usr=74.15%, sys=20.27%, ctx=21, majf=0, minf=7 00:19:05.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:05.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.212 issued rwts: total=12640,12622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.212 00:19:05.212 Run status group 0 (all jobs): 00:19:05.212 READ: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=49.4MiB (51.8MB), run=2009-2009msec 00:19:05.212 
WRITE: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=49.3MiB (51.7MB), run=2009-2009msec 00:19:05.212 16:24:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:05.212 16:24:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=a067f71c-e20a-4dd6-ad6e-93369b0ff97d 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb a067f71c-e20a-4dd6-ad6e-93369b0ff97d 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=a067f71c-e20a-4dd6-ad6e-93369b0ff97d 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:05.778 { 00:19:05.778 "uuid": "cecf105e-f60b-4c9f-a454-481f692ef8ec", 00:19:05.778 "name": "lvs_0", 00:19:05.778 "base_bdev": "Nvme0n1", 00:19:05.778 "total_data_clusters": 4, 00:19:05.778 "free_clusters": 0, 00:19:05.778 "block_size": 4096, 00:19:05.778 "cluster_size": 1073741824 00:19:05.778 }, 00:19:05.778 { 00:19:05.778 "uuid": "a067f71c-e20a-4dd6-ad6e-93369b0ff97d", 00:19:05.778 "name": "lvs_n_0", 00:19:05.778 "base_bdev": "9ca131c4-3a11-4a78-a28f-5c39170385f6", 00:19:05.778 "total_data_clusters": 1022, 00:19:05.778 "free_clusters": 1022, 00:19:05.778 "block_size": 4096, 00:19:05.778 "cluster_size": 4194304 00:19:05.778 } 00:19:05.778 ]' 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="a067f71c-e20a-4dd6-ad6e-93369b0ff97d") .free_clusters' 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:19:05.778 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="a067f71c-e20a-4dd6-ad6e-93369b0ff97d") .cluster_size' 00:19:06.036 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:19:06.036 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:19:06.036 4088 00:19:06.036 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:19:06.036 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:19:06.295 c43b6b5d-e769-44b3-a0f4-aad63cea5b7c 00:19:06.295 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:19:06.553 16:24:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:19:06.811 16:24:32 
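For reference, the 4088 MiB figure derived above is simply free_clusters times cluster_size converted to MiB. A minimal sketch of that arithmetic, assuming the helper's shape, selecting the lvstore by name rather than uuid for readability, and abbreviating /home/vagrant/spdk_repo/spdk/scripts/rpc.py to rpc.py:

  # lvs_n_0 reports 1022 free clusters of 4194304 bytes: 1022 * 4 MiB = 4088 MiB
  fc=$(rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0") .free_clusters')
  cs=$(rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0") .cluster_size')
  echo $(( fc * cs / 1024 / 1024 ))    # prints 4088

The nested lvol lbd_nest_0 is then created with exactly this size, so it fills the nested lvstore before being exported through cnode3 below.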
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:07.070 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:07.071 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:07.071 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:07.071 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:07.071 16:24:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:07.071 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:07.071 fio-3.35 00:19:07.071 Starting 1 thread 00:19:09.601 00:19:09.601 test: (groupid=0, jobs=1): err= 0: pid=89590: Tue Nov 26 16:24:34 2024 00:19:09.601 read: 
IOPS=5701, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2011msec) 00:19:09.601 slat (nsec): min=1917, max=397841, avg=3268.89, stdev=5100.85 00:19:09.601 clat (usec): min=3206, max=19492, avg=11718.34, stdev=981.86 00:19:09.601 lat (usec): min=3215, max=19495, avg=11721.61, stdev=981.38 00:19:09.601 clat percentiles (usec): 00:19:09.601 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:19:09.601 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:19:09.601 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304], 00:19:09.601 | 99.00th=[13960], 99.50th=[14353], 99.90th=[17957], 99.95th=[18220], 00:19:09.601 | 99.99th=[19268] 00:19:09.601 bw ( KiB/s): min=21876, max=23384, per=100.00%, avg=22807.00, stdev=654.21, samples=4 00:19:09.601 iops : min= 5469, max= 5846, avg=5701.75, stdev=163.55, samples=4 00:19:09.601 write: IOPS=5681, BW=22.2MiB/s (23.3MB/s)(44.6MiB/2011msec); 0 zone resets 00:19:09.601 slat (nsec): min=1997, max=274971, avg=3385.41, stdev=3975.82 00:19:09.601 clat (usec): min=2836, max=19823, avg=10655.60, stdev=970.54 00:19:09.601 lat (usec): min=2850, max=19827, avg=10658.99, stdev=970.30 00:19:09.601 clat percentiles (usec): 00:19:09.601 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:19:09.601 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:19:09.601 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:19:09.601 | 99.00th=[12780], 99.50th=[13435], 99.90th=[19268], 99.95th=[19530], 00:19:09.601 | 99.99th=[19792] 00:19:09.601 bw ( KiB/s): min=22224, max=23040, per=99.87%, avg=22698.50, stdev=353.26, samples=4 00:19:09.601 iops : min= 5556, max= 5760, avg=5674.50, stdev=88.23, samples=4 00:19:09.601 lat (msec) : 4=0.05%, 10=12.21%, 20=87.74% 00:19:09.601 cpu : usr=71.34%, sys=21.84%, ctx=4, majf=0, minf=7 00:19:09.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:09.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.601 issued rwts: total=11465,11426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.601 00:19:09.601 Run status group 0 (all jobs): 00:19:09.601 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2011-2011msec 00:19:09.601 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.6MiB (46.8MB), run=2011-2011msec 00:19:09.601 16:24:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:09.601 16:24:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:09.601 16:24:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:09.859 16:24:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:10.118 16:24:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:10.684 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:10.684 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.252 rmmod nvme_tcp 00:19:11.252 rmmod nvme_fabrics 00:19:11.252 rmmod nvme_keyring 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 89276 ']' 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 89276 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 89276 ']' 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 89276 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.252 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89276 00:19:11.511 killing process with pid 89276 00:19:11.511 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.511 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.511 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89276' 00:19:11.511 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 89276 00:19:11.511 16:24:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 89276 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.511 
16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:11.511 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:11.770 ************************************ 00:19:11.770 END TEST nvmf_fio_host 00:19:11.770 ************************************ 00:19:11.770 00:19:11.770 real 0m19.778s 00:19:11.770 user 1m26.329s 00:19:11.770 sys 0m4.319s 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.770 ************************************ 00:19:11.770 START TEST nvmf_failover 00:19:11.770 ************************************ 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:11.770 * Looking for test storage... 
00:19:11.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:19:11.770 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.028 --rc genhtml_branch_coverage=1 00:19:12.028 --rc genhtml_function_coverage=1 00:19:12.028 --rc genhtml_legend=1 00:19:12.028 --rc geninfo_all_blocks=1 00:19:12.028 --rc geninfo_unexecuted_blocks=1 00:19:12.028 00:19:12.028 ' 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.028 --rc genhtml_branch_coverage=1 00:19:12.028 --rc genhtml_function_coverage=1 00:19:12.028 --rc genhtml_legend=1 00:19:12.028 --rc geninfo_all_blocks=1 00:19:12.028 --rc geninfo_unexecuted_blocks=1 00:19:12.028 00:19:12.028 ' 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.028 --rc genhtml_branch_coverage=1 00:19:12.028 --rc genhtml_function_coverage=1 00:19:12.028 --rc genhtml_legend=1 00:19:12.028 --rc geninfo_all_blocks=1 00:19:12.028 --rc geninfo_unexecuted_blocks=1 00:19:12.028 00:19:12.028 ' 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.028 --rc genhtml_branch_coverage=1 00:19:12.028 --rc genhtml_function_coverage=1 00:19:12.028 --rc genhtml_legend=1 00:19:12.028 --rc geninfo_all_blocks=1 00:19:12.028 --rc geninfo_unexecuted_blocks=1 00:19:12.028 00:19:12.028 ' 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.028 
16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.028 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.029 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:12.029 Cannot find device "nvmf_init_br" 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:12.029 Cannot find device "nvmf_init_br2" 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:12.029 Cannot find device "nvmf_tgt_br" 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:12.029 Cannot find device "nvmf_tgt_br2" 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:12.029 Cannot find device "nvmf_init_br" 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:12.029 Cannot find device "nvmf_init_br2" 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:12.029 Cannot find device "nvmf_tgt_br" 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:12.029 Cannot find device "nvmf_tgt_br2" 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:12.029 Cannot find device "nvmf_br" 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:12.029 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:12.287 Cannot find device "nvmf_init_if" 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:12.287 Cannot find device "nvmf_init_if2" 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:12.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:12.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:12.287 
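The interface plumbing traced here (and continuing just below) builds the usual SPDK veth test topology: initiator-side interfaces (nvmf_init_if/nvmf_init_if2, 10.0.0.1/10.0.0.2) stay in the root namespace, target-side interfaces (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3/10.0.0.4) move into nvmf_tgt_ns_spdk, and everything is tied together by the nvmf_br bridge. Condensed into a sketch covering the first interface of each pair, with the same names as the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # ...plus the 'ip link set ... up' calls and iptables ACCEPT rules shown in the trace

The ping checks further down confirm this topology before the target is started inside the namespace.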
16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:12.287 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:12.288 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:12.288 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:12.288 00:19:12.288 --- 10.0.0.3 ping statistics --- 00:19:12.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.288 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:12.288 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:12.288 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:19:12.288 00:19:12.288 --- 10.0.0.4 ping statistics --- 00:19:12.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.288 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:12.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:12.288 00:19:12.288 --- 10.0.0.1 ping statistics --- 00:19:12.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.288 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:12.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:19:12.288 00:19:12.288 --- 10.0.0.2 ping statistics --- 00:19:12.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.288 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:12.288 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:12.546 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:12.546 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:12.546 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.546 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:12.546 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=89876 00:19:12.547 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:12.547 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 89876 00:19:12.547 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.547 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89876 ']' 00:19:12.547 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.547 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.547 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.547 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.547 16:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:12.547 [2024-11-26 16:24:38.011788] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:19:12.547 [2024-11-26 16:24:38.012049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.547 [2024-11-26 16:24:38.158316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:12.547 [2024-11-26 16:24:38.178657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.547 [2024-11-26 16:24:38.178945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.547 [2024-11-26 16:24:38.179099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.547 [2024-11-26 16:24:38.179148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.547 [2024-11-26 16:24:38.179262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
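Once nvmf_tgt (pid 89876, core mask 0xE) is listening inside the namespace, the target-side bring-up that follows in the trace reduces to roughly this RPC sequence (rpc.py path abbreviated as before; the three listeners are what the failover test later adds and removes):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done

bdevperf is then launched separately (-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f) so that NVMe controllers can be attached to it over its own RPC socket.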
00:19:12.547 [2024-11-26 16:24:38.180093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.547 [2024-11-26 16:24:38.180223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.547 [2024-11-26 16:24:38.180227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.805 [2024-11-26 16:24:38.210790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:12.805 16:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.805 16:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:12.805 16:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:12.805 16:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.805 16:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:12.805 16:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.805 16:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:13.063 [2024-11-26 16:24:38.595427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.063 16:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:13.322 Malloc0 00:19:13.322 16:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:13.580 16:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:13.851 16:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:14.109 [2024-11-26 16:24:39.573797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:14.109 16:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:14.368 [2024-11-26 16:24:39.801930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:14.368 16:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:14.626 [2024-11-26 16:24:40.030177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:14.626 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:14.626 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89926 00:19:14.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:14.626 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.626 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89926 /var/tmp/bdevperf.sock 00:19:14.626 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89926 ']' 00:19:14.626 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.626 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.626 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.627 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.627 16:24:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:15.562 16:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.562 16:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:15.562 16:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:15.821 NVMe0n1 00:19:15.821 16:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:16.080 00:19:16.080 16:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89950 00:19:16.080 16:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:16.080 16:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:17.015 16:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:17.582 16:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:20.900 16:24:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:20.900 00:19:20.900 16:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:21.159 16:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:24.463 16:24:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:24.463 [2024-11-26 16:24:49.827194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:24.463 16:24:49 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:25.471 16:24:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:25.729 16:24:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89950 00:19:32.296 { 00:19:32.296 "results": [ 00:19:32.296 { 00:19:32.296 "job": "NVMe0n1", 00:19:32.296 "core_mask": "0x1", 00:19:32.296 "workload": "verify", 00:19:32.296 "status": "finished", 00:19:32.296 "verify_range": { 00:19:32.296 "start": 0, 00:19:32.296 "length": 16384 00:19:32.296 }, 00:19:32.296 "queue_depth": 128, 00:19:32.296 "io_size": 4096, 00:19:32.296 "runtime": 15.008504, 00:19:32.296 "iops": 10085.548832848364, 00:19:32.296 "mibps": 39.39667512831392, 00:19:32.296 "io_failed": 3085, 00:19:32.296 "io_timeout": 0, 00:19:32.296 "avg_latency_us": 12409.711587162756, 00:19:32.296 "min_latency_us": 573.44, 00:19:32.296 "max_latency_us": 14239.185454545455 00:19:32.296 } 00:19:32.296 ], 00:19:32.296 "core_count": 1 00:19:32.296 } 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89926 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89926 ']' 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89926 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89926 00:19:32.296 killing process with pid 89926 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89926' 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89926 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89926 00:19:32.296 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:32.296 [2024-11-26 16:24:40.102957] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:19:32.297 [2024-11-26 16:24:40.103062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89926 ] 00:19:32.297 [2024-11-26 16:24:40.255780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.297 [2024-11-26 16:24:40.279553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.297 [2024-11-26 16:24:40.312351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.297 Running I/O for 15 seconds... 
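The summary above (about 10.1k IOPS over the 15 s run, io_failed=3085) comes from bdevperf verifying NVMe0n1 while host/failover.sh cycles the target's listeners. Abbreviated from the trace, the host-side sequence is roughly:

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420; sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421; sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420; sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  wait   # for perform_tests to finish the 15 s run

The ABORTED - SQ DELETION completions dumped below are consistent with in-flight commands being aborted as each removed listener's queue pairs are torn down, which is the condition the -x failover attachments to the alternate ports are meant to absorb.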
00:19:32.297 7829.00 IOPS, 30.58 MiB/s [2024-11-26T16:24:57.950Z] [2024-11-26 16:24:42.918156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:32.297 [2024-11-26 16:24:42.918513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.918668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 
16:24:42.918839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.918979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.918993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.919167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.297 [2024-11-26 16:24:42.919196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.297 [2024-11-26 16:24:42.919353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.297 [2024-11-26 16:24:42.919382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.298 [2024-11-26 16:24:42.919483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72104 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.919958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.919986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 
16:24:42.920462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.298 [2024-11-26 16:24:42.920626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.298 [2024-11-26 16:24:42.920639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.920654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.920667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.920698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.920738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.920756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.920771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.920786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.920817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.920833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.920848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.920864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.920879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.920895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.920910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.920926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.920940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.920957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.920971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 
[2024-11-26 16:24:42.921909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.921975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.921990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.922004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.922019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.922032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.922062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.299 [2024-11-26 16:24:42.922075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.299 [2024-11-26 16:24:42.922090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:42.922391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e238b0 is same with the state(6) to be set 00:19:32.300 [2024-11-26 16:24:42.922421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.300 [2024-11-26 16:24:42.922431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.300 [2024-11-26 16:24:42.922441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72720 len:8 PRP1 0x0 PRP2 0x0 00:19:32.300 [2024-11-26 16:24:42.922464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922520] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:32.300 [2024-11-26 16:24:42.922576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.300 [2024-11-26 16:24:42.922596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.300 [2024-11-26 16:24:42.922624] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.300 [2024-11-26 16:24:42.922650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.300 [2024-11-26 16:24:42.922676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:42.922690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:32.300 [2024-11-26 16:24:42.922739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e009d0 (9): Bad file descriptor 00:19:32.300 [2024-11-26 16:24:42.926550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:32.300 [2024-11-26 16:24:42.951372] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:32.300 8686.00 IOPS, 33.93 MiB/s [2024-11-26T16:24:57.953Z] 9225.33 IOPS, 36.04 MiB/s [2024-11-26T16:24:57.953Z] 9503.00 IOPS, 37.12 MiB/s [2024-11-26T16:24:57.953Z] [2024-11-26 16:24:46.552525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.300 [2024-11-26 16:24:46.552587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.300 [2024-11-26 16:24:46.552690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.300 [2024-11-26 16:24:46.552752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.300 [2024-11-26 16:24:46.552784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.300 [2024-11-26 16:24:46.552815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.300 [2024-11-26 
16:24:46.552847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.300 [2024-11-26 16:24:46.552878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.300 [2024-11-26 16:24:46.552910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.552941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.552972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.552989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.300 [2024-11-26 16:24:46.553328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.300 [2024-11-26 16:24:46.553343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.553389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.553436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.553483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.553514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553578] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.553977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.553990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.554025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.554055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.554083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.554126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.554153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.554181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.554209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.554237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.554265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.301 [2024-11-26 16:24:46.554293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.301 [2024-11-26 16:24:46.554307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.301 [2024-11-26 16:24:46.554320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 
[2024-11-26 16:24:46.554593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.554837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.554872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.554900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.554927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.554955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.554983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.554998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.555010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.555038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.555066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.555093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.555121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.555148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.555175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555191] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.555210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.555238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.555266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.302 [2024-11-26 16:24:46.555300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.555327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.555371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.555428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.555460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.302 [2024-11-26 16:24:46.555490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.302 [2024-11-26 16:24:46.555505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.555851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:115152 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.555879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.555907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.555934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.555964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.555979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.555992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 
[2024-11-26 16:24:46.556165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.303 [2024-11-26 16:24:46.556320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.556364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.556394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.556443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.556476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.556505] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.556535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.303 [2024-11-26 16:24:46.556565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24b40 is same with the state(6) to be set 00:19:32.303 [2024-11-26 16:24:46.556597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.303 [2024-11-26 16:24:46.556608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.303 [2024-11-26 16:24:46.556619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114760 len:8 PRP1 0x0 PRP2 0x0 00:19:32.303 [2024-11-26 16:24:46.556633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.303 [2024-11-26 16:24:46.556648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.303 [2024-11-26 16:24:46.556672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.304 [2024-11-26 16:24:46.556683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115280 len:8 PRP1 0x0 PRP2 0x0 00:19:32.304 [2024-11-26 16:24:46.556696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.556718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.304 [2024-11-26 16:24:46.556746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.304 [2024-11-26 16:24:46.556757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115288 len:8 PRP1 0x0 PRP2 0x0 00:19:32.304 [2024-11-26 16:24:46.556771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.556786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.304 [2024-11-26 16:24:46.556796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.304 [2024-11-26 16:24:46.556807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115296 len:8 PRP1 0x0 PRP2 0x0 00:19:32.304 [2024-11-26 16:24:46.556821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.556835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.304 [2024-11-26 16:24:46.556845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:19:32.304 [2024-11-26 16:24:46.556864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115304 len:8 PRP1 0x0 PRP2 0x0 00:19:32.304 [2024-11-26 16:24:46.556879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.556894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.304 [2024-11-26 16:24:46.556904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.304 [2024-11-26 16:24:46.556915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115312 len:8 PRP1 0x0 PRP2 0x0 00:19:32.304 [2024-11-26 16:24:46.556931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.556947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.304 [2024-11-26 16:24:46.556957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.304 [2024-11-26 16:24:46.556968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115320 len:8 PRP1 0x0 PRP2 0x0 00:19:32.304 [2024-11-26 16:24:46.556982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.556996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.304 [2024-11-26 16:24:46.557006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.304 [2024-11-26 16:24:46.557032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115328 len:8 PRP1 0x0 PRP2 0x0 00:19:32.304 [2024-11-26 16:24:46.557060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.557088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.304 [2024-11-26 16:24:46.557112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.304 [2024-11-26 16:24:46.557122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115336 len:8 PRP1 0x0 PRP2 0x0 00:19:32.304 [2024-11-26 16:24:46.557134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.557180] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:32.304 [2024-11-26 16:24:46.557234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.304 [2024-11-26 16:24:46.557254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.557268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.304 [2024-11-26 16:24:46.557281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.557294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.304 [2024-11-26 16:24:46.557307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.557321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.304 [2024-11-26 16:24:46.557333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:46.557347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:32.304 [2024-11-26 16:24:46.557427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e009d0 (9): Bad file descriptor 00:19:32.304 [2024-11-26 16:24:46.561140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:32.304 [2024-11-26 16:24:46.583012] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:19:32.304 9563.60 IOPS, 37.36 MiB/s [2024-11-26T16:24:57.957Z] 9701.67 IOPS, 37.90 MiB/s [2024-11-26T16:24:57.957Z] 9802.57 IOPS, 38.29 MiB/s [2024-11-26T16:24:57.957Z] 9879.25 IOPS, 38.59 MiB/s [2024-11-26T16:24:57.957Z] 9922.00 IOPS, 38.76 MiB/s [2024-11-26T16:24:57.957Z] [2024-11-26 16:24:51.109866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 16:24:51.109925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.109968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 16:24:51.109983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.109998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 16:24:51.110012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 16:24:51.110039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 16:24:51.110066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 
16:24:51.110093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 16:24:51.110120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 16:24:51.110146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 16:24:51.110173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.304 [2024-11-26 16:24:51.110200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.304 [2024-11-26 16:24:51.110238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.304 [2024-11-26 16:24:51.110290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.304 [2024-11-26 16:24:51.110316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.304 [2024-11-26 16:24:51.110343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.304 [2024-11-26 16:24:51.110385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.304 [2024-11-26 16:24:51.110412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.304 [2024-11-26 16:24:51.110426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.110740] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.110769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.110797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.110826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.110854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.110882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.110926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.110953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.110982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.110997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.111186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.111214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.111242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.111269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.111297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.111324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.111352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.305 [2024-11-26 16:24:51.111379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.305 [2024-11-26 16:24:51.111572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.305 [2024-11-26 16:24:51.111585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 
16:24:51.111659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.111880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.111909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.111938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.111981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.111996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112257] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.306 [2024-11-26 16:24:51.112353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.112391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.112420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.112465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.112493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.112523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.306 [2024-11-26 16:24:51.112539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.306 [2024-11-26 16:24:51.112553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.307 [2024-11-26 16:24:51.112589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.307 [2024-11-26 16:24:51.112619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.112976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.112990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 
[2024-11-26 16:24:51.113255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:32.307 [2024-11-26 16:24:51.113390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e24800 is same with the state(6) to be set 00:19:32.307 [2024-11-26 16:24:51.113438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.307 [2024-11-26 16:24:51.113450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.307 [2024-11-26 16:24:51.113461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102776 len:8 PRP1 0x0 PRP2 0x0 00:19:32.307 [2024-11-26 16:24:51.113474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.307 [2024-11-26 16:24:51.113499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.307 [2024-11-26 16:24:51.113509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103168 len:8 PRP1 0x0 PRP2 0x0 00:19:32.307 [2024-11-26 16:24:51.113522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.307 [2024-11-26 16:24:51.113545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.307 [2024-11-26 16:24:51.113555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103176 len:8 PRP1 0x0 PRP2 0x0 00:19:32.307 [2024-11-26 16:24:51.113568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113582] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.307 [2024-11-26 16:24:51.113592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.307 [2024-11-26 16:24:51.113602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103184 len:8 PRP1 0x0 PRP2 0x0 00:19:32.307 [2024-11-26 16:24:51.113615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.307 [2024-11-26 16:24:51.113638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.307 [2024-11-26 16:24:51.113648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103192 len:8 PRP1 0x0 PRP2 0x0 00:19:32.307 [2024-11-26 16:24:51.113661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.307 [2024-11-26 16:24:51.113683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.307 [2024-11-26 16:24:51.113694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103200 len:8 PRP1 0x0 PRP2 0x0 00:19:32.307 [2024-11-26 16:24:51.113707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.307 [2024-11-26 16:24:51.113729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.307 [2024-11-26 16:24:51.113754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103208 len:8 PRP1 0x0 PRP2 0x0 00:19:32.307 [2024-11-26 16:24:51.113767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.307 [2024-11-26 16:24:51.113792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.307 [2024-11-26 16:24:51.113801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103216 len:8 PRP1 0x0 PRP2 0x0 00:19:32.307 [2024-11-26 16:24:51.113821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.307 [2024-11-26 16:24:51.113835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.307 [2024-11-26 16:24:51.113845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.308 [2024-11-26 16:24:51.113854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103224 len:8 PRP1 0x0 PRP2 0x0 00:19:32.308 [2024-11-26 16:24:51.113867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.113880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:19:32.308 [2024-11-26 16:24:51.113889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.308 [2024-11-26 16:24:51.113899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103232 len:8 PRP1 0x0 PRP2 0x0 00:19:32.308 [2024-11-26 16:24:51.113912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.113925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.308 [2024-11-26 16:24:51.113934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.308 [2024-11-26 16:24:51.113944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103240 len:8 PRP1 0x0 PRP2 0x0 00:19:32.308 [2024-11-26 16:24:51.113957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.113970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.308 [2024-11-26 16:24:51.113979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.308 [2024-11-26 16:24:51.113989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103248 len:8 PRP1 0x0 PRP2 0x0 00:19:32.308 [2024-11-26 16:24:51.114019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.308 [2024-11-26 16:24:51.114042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.308 [2024-11-26 16:24:51.114052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103256 len:8 PRP1 0x0 PRP2 0x0 00:19:32.308 [2024-11-26 16:24:51.114065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.308 [2024-11-26 16:24:51.114104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.308 [2024-11-26 16:24:51.114114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103264 len:8 PRP1 0x0 PRP2 0x0 00:19:32.308 [2024-11-26 16:24:51.114127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.308 [2024-11-26 16:24:51.114150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.308 [2024-11-26 16:24:51.114160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103272 len:8 PRP1 0x0 PRP2 0x0 00:19:32.308 [2024-11-26 16:24:51.114174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.308 [2024-11-26 
16:24:51.114200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.308 [2024-11-26 16:24:51.114219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103280 len:8 PRP1 0x0 PRP2 0x0 00:19:32.308 [2024-11-26 16:24:51.114233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:32.308 [2024-11-26 16:24:51.114257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:32.308 [2024-11-26 16:24:51.114267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103288 len:8 PRP1 0x0 PRP2 0x0 00:19:32.308 [2024-11-26 16:24:51.114281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114329] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:32.308 [2024-11-26 16:24:51.114417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.308 [2024-11-26 16:24:51.114456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.308 [2024-11-26 16:24:51.114486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.308 [2024-11-26 16:24:51.114516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.308 [2024-11-26 16:24:51.114544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.308 [2024-11-26 16:24:51.114558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:32.308 [2024-11-26 16:24:51.114590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e009d0 (9): Bad file descriptor 00:19:32.308 [2024-11-26 16:24:51.118440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:32.308 [2024-11-26 16:24:51.141518] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:19:32.308 9916.10 IOPS, 38.73 MiB/s [2024-11-26T16:24:57.961Z] 9955.73 IOPS, 38.89 MiB/s [2024-11-26T16:24:57.961Z] 9996.42 IOPS, 39.05 MiB/s [2024-11-26T16:24:57.961Z] 10033.31 IOPS, 39.19 MiB/s [2024-11-26T16:24:57.961Z] 10060.64 IOPS, 39.30 MiB/s [2024-11-26T16:24:57.961Z] 10084.87 IOPS, 39.39 MiB/s 00:19:32.308 Latency(us) 00:19:32.308 [2024-11-26T16:24:57.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.308 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:32.308 Verification LBA range: start 0x0 length 0x4000 00:19:32.308 NVMe0n1 : 15.01 10085.55 39.40 205.55 0.00 12409.71 573.44 14239.19 00:19:32.308 [2024-11-26T16:24:57.961Z] =================================================================================================================== 00:19:32.308 [2024-11-26T16:24:57.961Z] Total : 10085.55 39.40 205.55 0.00 12409.71 573.44 14239.19 00:19:32.308 Received shutdown signal, test time was about 15.000000 seconds 00:19:32.308 00:19:32.308 Latency(us) 00:19:32.308 [2024-11-26T16:24:57.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.308 [2024-11-26T16:24:57.961Z] =================================================================================================================== 00:19:32.308 [2024-11-26T16:24:57.961Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:32.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90123 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90123 /var/tmp/bdevperf.sock 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90123 ']' 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.308 16:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:32.308 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.308 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:32.308 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:32.308 [2024-11-26 16:24:57.526267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:32.308 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:32.308 [2024-11-26 16:24:57.765945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:32.308 16:24:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:32.567 NVMe0n1 00:19:32.567 16:24:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:32.826 00:19:32.826 16:24:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:33.393 00:19:33.393 16:24:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:33.393 16:24:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.394 16:24:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:33.652 16:24:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:36.938 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:36.938 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:36.938 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90192 00:19:36.938 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:36.938 16:25:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 90192 00:19:38.314 { 00:19:38.314 "results": [ 00:19:38.314 { 00:19:38.314 "job": "NVMe0n1", 00:19:38.314 "core_mask": "0x1", 00:19:38.314 "workload": "verify", 00:19:38.314 "status": "finished", 00:19:38.314 "verify_range": { 00:19:38.314 "start": 0, 00:19:38.314 "length": 16384 00:19:38.314 }, 00:19:38.314 "queue_depth": 128, 
00:19:38.314 "io_size": 4096, 00:19:38.314 "runtime": 1.006729, 00:19:38.314 "iops": 7776.670782305864, 00:19:38.314 "mibps": 30.37762024338228, 00:19:38.314 "io_failed": 0, 00:19:38.314 "io_timeout": 0, 00:19:38.314 "avg_latency_us": 16397.341167454335, 00:19:38.314 "min_latency_us": 2055.447272727273, 00:19:38.314 "max_latency_us": 13941.294545454546 00:19:38.314 } 00:19:38.314 ], 00:19:38.314 "core_count": 1 00:19:38.314 } 00:19:38.314 16:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:38.314 [2024-11-26 16:24:57.019781] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:19:38.314 [2024-11-26 16:24:57.019884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90123 ] 00:19:38.315 [2024-11-26 16:24:57.164531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.315 [2024-11-26 16:24:57.183706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.315 [2024-11-26 16:24:57.211527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:38.315 [2024-11-26 16:24:59.251888] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:38.315 [2024-11-26 16:24:59.252002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.315 [2024-11-26 16:24:59.252025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.315 [2024-11-26 16:24:59.252041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.315 [2024-11-26 16:24:59.252053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.315 [2024-11-26 16:24:59.252082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.315 [2024-11-26 16:24:59.252094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.315 [2024-11-26 16:24:59.252106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:38.315 [2024-11-26 16:24:59.252118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.315 [2024-11-26 16:24:59.252131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:19:38.315 [2024-11-26 16:24:59.252174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:19:38.315 [2024-11-26 16:24:59.252203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bda9d0 (9): Bad file descriptor 00:19:38.315 [2024-11-26 16:24:59.254683] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:19:38.315 Running I/O for 1 seconds... 00:19:38.315 7701.00 IOPS, 30.08 MiB/s 00:19:38.315 Latency(us) 00:19:38.315 [2024-11-26T16:25:03.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.315 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:38.315 Verification LBA range: start 0x0 length 0x4000 00:19:38.315 NVMe0n1 : 1.01 7776.67 30.38 0.00 0.00 16397.34 2055.45 13941.29 00:19:38.315 [2024-11-26T16:25:03.968Z] =================================================================================================================== 00:19:38.315 [2024-11-26T16:25:03.968Z] Total : 7776.67 30.38 0.00 0.00 16397.34 2055.45 13941.29 00:19:38.315 16:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:38.315 16:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:38.574 16:25:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:38.833 16:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:38.833 16:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:39.092 16:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:39.092 16:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:42.377 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:42.377 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:42.377 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 90123 00:19:42.377 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90123 ']' 00:19:42.377 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90123 00:19:42.377 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:42.377 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.378 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90123 00:19:42.378 killing process with pid 90123 00:19:42.378 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.378 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.378 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90123' 00:19:42.378 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90123 00:19:42.378 16:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90123 00:19:42.636 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:42.636 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.895 rmmod nvme_tcp 00:19:42.895 rmmod nvme_fabrics 00:19:42.895 rmmod nvme_keyring 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 89876 ']' 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 89876 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89876 ']' 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89876 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.895 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89876 00:19:43.154 killing process with pid 89876 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89876' 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89876 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89876 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.154 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:43.155 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:43.155 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:43.155 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:43.155 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:43.413 00:19:43.413 real 0m31.630s 00:19:43.413 user 2m2.165s 00:19:43.413 sys 0m5.321s 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.413 16:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:43.413 ************************************ 00:19:43.413 END TEST nvmf_failover 00:19:43.413 ************************************ 00:19:43.413 16:25:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:43.413 16:25:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:43.413 16:25:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.413 16:25:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.413 ************************************ 00:19:43.413 START TEST nvmf_host_discovery 00:19:43.413 ************************************ 00:19:43.413 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:43.673 * Looking for test storage... 
00:19:43.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.673 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.674 --rc genhtml_branch_coverage=1 00:19:43.674 --rc genhtml_function_coverage=1 00:19:43.674 --rc genhtml_legend=1 00:19:43.674 --rc geninfo_all_blocks=1 00:19:43.674 --rc geninfo_unexecuted_blocks=1 00:19:43.674 00:19:43.674 ' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.674 --rc genhtml_branch_coverage=1 00:19:43.674 --rc genhtml_function_coverage=1 00:19:43.674 --rc genhtml_legend=1 00:19:43.674 --rc geninfo_all_blocks=1 00:19:43.674 --rc geninfo_unexecuted_blocks=1 00:19:43.674 00:19:43.674 ' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.674 --rc genhtml_branch_coverage=1 00:19:43.674 --rc genhtml_function_coverage=1 00:19:43.674 --rc genhtml_legend=1 00:19:43.674 --rc geninfo_all_blocks=1 00:19:43.674 --rc geninfo_unexecuted_blocks=1 00:19:43.674 00:19:43.674 ' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:43.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.674 --rc genhtml_branch_coverage=1 00:19:43.674 --rc genhtml_function_coverage=1 00:19:43.674 --rc genhtml_legend=1 00:19:43.674 --rc geninfo_all_blocks=1 00:19:43.674 --rc geninfo_unexecuted_blocks=1 00:19:43.674 00:19:43.674 ' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.674 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:43.674 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:43.675 Cannot find device "nvmf_init_br" 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:43.675 Cannot find device "nvmf_init_br2" 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:43.675 Cannot find device "nvmf_tgt_br" 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.675 Cannot find device "nvmf_tgt_br2" 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:43.675 Cannot find device "nvmf_init_br" 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:43.675 Cannot find device "nvmf_init_br2" 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:43.675 Cannot find device "nvmf_tgt_br" 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:43.675 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:43.933 Cannot find device "nvmf_tgt_br2" 00:19:43.933 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:43.933 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:43.933 Cannot find device "nvmf_br" 00:19:43.933 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:43.933 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:43.934 Cannot find device "nvmf_init_if" 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:43.934 Cannot find device "nvmf_init_if2" 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.934 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.934 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:44.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:44.192 00:19:44.192 --- 10.0.0.3 ping statistics --- 00:19:44.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.192 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:44.192 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:44.192 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:19:44.192 00:19:44.192 --- 10.0.0.4 ping statistics --- 00:19:44.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.192 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:44.192 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:44.192 00:19:44.192 --- 10.0.0.1 ping statistics --- 00:19:44.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.193 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:44.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:44.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:19:44.193 00:19:44.193 --- 10.0.0.2 ping statistics --- 00:19:44.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.193 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=90512 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 90512 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 90512 ']' 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.193 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.193 [2024-11-26 16:25:09.737638] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
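The nvmf_veth_init block above builds the whole test network before the target starts. A condensed sketch of that topology, using only commands and addresses that appear in the trace (the log repeats the same steps for the second interface pair nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4):

    # target-side interfaces live in a dedicated network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator gets 10.0.0.1, target (inside the namespace) gets 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the host-side veth ends together and open the NVMe/TCP port
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # connectivity check, as logged above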
00:19:44.193 [2024-11-26 16:25:09.737728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.451 [2024-11-26 16:25:09.885275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.451 [2024-11-26 16:25:09.903389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.451 [2024-11-26 16:25:09.903450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.451 [2024-11-26 16:25:09.903460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.451 [2024-11-26 16:25:09.903467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.451 [2024-11-26 16:25:09.903473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.452 [2024-11-26 16:25:09.903717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.452 [2024-11-26 16:25:09.931851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:44.452 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.452 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:44.452 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:44.452 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.452 16:25:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.452 [2024-11-26 16:25:10.039450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.452 [2024-11-26 16:25:10.047568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.452 16:25:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.452 null0 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.452 null1 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90531 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90531 /tmp/host.sock 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 90531 ']' 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.452 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.452 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.710 [2024-11-26 16:25:10.134617] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
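Everything the test configures on the target is driven over SPDK's JSON-RPC interface; rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py. A minimal sketch of the sequence recorded here, including the host-side discovery start that follows just below (socket paths, NQNs and sizes exactly as logged; the scripts/rpc.py path is assumed relative to the SPDK repo):

    # target app: nvmf_tgt -i 0 -e 0xFFFF -m 0x2 inside nvmf_tgt_ns_spdk,
    # answering on the default /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512

    # host app: nvmf_tgt -m 0x1 -r /tmp/host.sock, acting as the NVMe-oF host
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test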
00:19:44.710 [2024-11-26 16:25:10.134714] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90531 ] 00:19:44.710 [2024-11-26 16:25:10.286403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.710 [2024-11-26 16:25:10.311009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.710 [2024-11-26 16:25:10.344927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.968 16:25:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.968 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.226 16:25:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.226 [2024-11-26 16:25:10.755743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:45.226 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:45.227 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:19:45.485 16:25:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:19:46.052 [2024-11-26 16:25:11.412058] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:46.052 [2024-11-26 16:25:11.412233] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:46.052 
[2024-11-26 16:25:11.412264] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:46.052 [2024-11-26 16:25:11.418108] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:46.052 [2024-11-26 16:25:11.472620] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:46.052 [2024-11-26 16:25:11.473771] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1216a00:1 started. 00:19:46.052 [2024-11-26 16:25:11.475452] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:46.052 [2024-11-26 16:25:11.475642] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:46.052 [2024-11-26 16:25:11.480872] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1216a00 was disconnected and freed. delete nvme_qpair. 00:19:46.620 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:46.620 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:46.620 16:25:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:46.620 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:46.621 [2024-11-26 16:25:12.234269] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1200810:1 started. 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.621 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.621 [2024-11-26 16:25:12.241292] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1200810 was disconnected and freed. delete nvme_qpair. 
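All of the checks in this trace follow one polling idiom: a condition built from rpc_cmd output piped through jq, sort and xargs is re-evaluated up to ten times with a one-second pause between attempts. A paraphrased sketch of the helpers involved (bodies reconstructed from the xtrace lines, not copied from the scripts; rpc_cmd stands in for scripts/rpc.py):

    get_bdev_list() {
        # names of bdevs visible to the host app, as a single sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # e.g. wait until both null bdevs show up through discovery
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'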
00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.880 [2024-11-26 16:25:12.345226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:46.880 [2024-11-26 16:25:12.345745] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:46.880 [2024-11-26 16:25:12.345768] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:46.880 [2024-11-26 16:25:12.351763] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:46.880 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:46.881 [2024-11-26 16:25:12.416197] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:19:46.881 [2024-11-26 16:25:12.416242] 
bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:46.881 [2024-11-26 16:25:12.416252] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:46.881 [2024-11-26 16:25:12.416258] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.881 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.140 [2024-11-26 16:25:12.582325] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:47.140 [2024-11-26 16:25:12.582366] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:47.140 [2024-11-26 16:25:12.582858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.140 [2024-11-26 16:25:12.582891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.140 [2024-11-26 16:25:12.582920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.140 [2024-11-26 16:25:12.582928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.140 [2024-11-26 16:25:12.582938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.140 [2024-11-26 16:25:12.582947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.140 [2024-11-26 16:25:12.582956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.140 [2024-11-26 16:25:12.582964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:47.140 [2024-11-26 16:25:12.582973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x11f2080 is same with the state(6) to be set 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:47.140 [2024-11-26 16:25:12.588338] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:47.140 [2024-11-26 16:25:12.588409] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:47.140 [2024-11-26 16:25:12.588499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f2080 (9): Bad file descriptor 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:47.140 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.141 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:47.399 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:47.400 
16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.400 16:25:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.774 [2024-11-26 16:25:14.002807] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:48.774 [2024-11-26 16:25:14.002998] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:48.774 [2024-11-26 16:25:14.003029] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:48.774 [2024-11-26 16:25:14.008842] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:48.774 [2024-11-26 16:25:14.067167] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:19:48.774 [2024-11-26 16:25:14.067954] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x11f1b80:1 started. 00:19:48.774 [2024-11-26 16:25:14.069859] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:48.774 [2024-11-26 16:25:14.070057] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:48.774 [2024-11-26 16:25:14.071859] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x11f1b80 was disconnected and freed. delete nvme_qpair. 
00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.774 request: 00:19:48.774 { 00:19:48.774 "name": "nvme", 00:19:48.774 "trtype": "tcp", 00:19:48.774 "traddr": "10.0.0.3", 00:19:48.774 "adrfam": "ipv4", 00:19:48.774 "trsvcid": "8009", 00:19:48.774 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:48.774 "wait_for_attach": true, 00:19:48.774 "method": "bdev_nvme_start_discovery", 00:19:48.774 "req_id": 1 00:19:48.774 } 00:19:48.774 Got JSON-RPC error response 00:19:48.774 response: 00:19:48.774 { 00:19:48.774 "code": -17, 00:19:48.774 "message": "File exists" 00:19:48.774 } 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:48.774 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.775 request: 00:19:48.775 { 00:19:48.775 "name": "nvme_second", 00:19:48.775 "trtype": "tcp", 00:19:48.775 "traddr": "10.0.0.3", 00:19:48.775 "adrfam": "ipv4", 00:19:48.775 "trsvcid": "8009", 00:19:48.775 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:48.775 "wait_for_attach": true, 00:19:48.775 "method": "bdev_nvme_start_discovery", 00:19:48.775 "req_id": 1 00:19:48.775 } 00:19:48.775 Got JSON-RPC error response 00:19:48.775 response: 00:19:48.775 { 00:19:48.775 "code": -17, 00:19:48.775 "message": "File exists" 00:19:48.775 } 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.775 16:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.710 [2024-11-26 16:25:15.338432] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:19:49.710 [2024-11-26 16:25:15.338480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12161d0 with addr=10.0.0.3, port=8010 00:19:49.710 [2024-11-26 16:25:15.338498] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:49.710 [2024-11-26 16:25:15.338507] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:49.710 [2024-11-26 16:25:15.338515] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:51.085 [2024-11-26 16:25:16.338425] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.085 [2024-11-26 16:25:16.338496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12161d0 with addr=10.0.0.3, port=8010 00:19:51.085 [2024-11-26 16:25:16.338514] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:51.085 [2024-11-26 16:25:16.338523] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:51.085 [2024-11-26 16:25:16.338532] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:52.022 [2024-11-26 16:25:17.338311] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:52.022 request: 00:19:52.022 { 00:19:52.022 "name": "nvme_second", 00:19:52.022 "trtype": "tcp", 00:19:52.022 "traddr": "10.0.0.3", 00:19:52.022 "adrfam": "ipv4", 00:19:52.022 "trsvcid": "8010", 00:19:52.022 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:52.022 "wait_for_attach": false, 00:19:52.022 "attach_timeout_ms": 3000, 00:19:52.022 "method": "bdev_nvme_start_discovery", 00:19:52.022 "req_id": 1 00:19:52.022 } 00:19:52.022 Got JSON-RPC error response 00:19:52.022 response: 00:19:52.022 { 00:19:52.022 "code": -110, 00:19:52.022 "message": "Connection timed out" 00:19:52.022 } 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:52.022 16:25:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90531 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:52.022 rmmod nvme_tcp 00:19:52.022 rmmod nvme_fabrics 00:19:52.022 rmmod nvme_keyring 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 90512 ']' 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 90512 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 90512 ']' 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 90512 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90512 00:19:52.022 killing process with pid 90512 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90512' 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 90512 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 90512 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:52.022 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:52.282 00:19:52.282 real 0m8.867s 00:19:52.282 user 0m16.852s 00:19:52.282 sys 0m1.911s 00:19:52.282 ************************************ 00:19:52.282 END TEST nvmf_host_discovery 00:19:52.282 ************************************ 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.282 16:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.542 ************************************ 00:19:52.542 START TEST nvmf_host_multipath_status 00:19:52.542 ************************************ 00:19:52.542 16:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:52.542 * Looking for test storage... 00:19:52.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:52.542 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:52.542 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:52.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.543 --rc genhtml_branch_coverage=1 00:19:52.543 --rc genhtml_function_coverage=1 00:19:52.543 --rc genhtml_legend=1 00:19:52.543 --rc geninfo_all_blocks=1 00:19:52.543 --rc geninfo_unexecuted_blocks=1 00:19:52.543 00:19:52.543 ' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:52.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.543 --rc genhtml_branch_coverage=1 00:19:52.543 --rc genhtml_function_coverage=1 00:19:52.543 --rc genhtml_legend=1 00:19:52.543 --rc geninfo_all_blocks=1 00:19:52.543 --rc geninfo_unexecuted_blocks=1 00:19:52.543 00:19:52.543 ' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:52.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.543 --rc genhtml_branch_coverage=1 00:19:52.543 --rc genhtml_function_coverage=1 00:19:52.543 --rc genhtml_legend=1 00:19:52.543 --rc geninfo_all_blocks=1 00:19:52.543 --rc geninfo_unexecuted_blocks=1 00:19:52.543 00:19:52.543 ' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:52.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.543 --rc genhtml_branch_coverage=1 00:19:52.543 --rc genhtml_function_coverage=1 00:19:52.543 --rc genhtml_legend=1 00:19:52.543 --rc geninfo_all_blocks=1 00:19:52.543 --rc geninfo_unexecuted_blocks=1 00:19:52.543 00:19:52.543 ' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:52.543 16:25:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:52.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:52.543 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:52.544 Cannot find device "nvmf_init_br" 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:52.544 Cannot find device "nvmf_init_br2" 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:52.544 Cannot find device "nvmf_tgt_br" 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.544 Cannot find device "nvmf_tgt_br2" 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:52.544 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:52.803 Cannot find device "nvmf_init_br" 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:52.803 Cannot find device "nvmf_init_br2" 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:52.803 Cannot find device "nvmf_tgt_br" 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:52.803 Cannot find device "nvmf_tgt_br2" 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:52.803 Cannot find device "nvmf_br" 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:52.803 Cannot find device "nvmf_init_if" 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:52.803 Cannot find device "nvmf_init_if2" 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:52.803 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.804 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:53.062 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:53.062 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:53.062 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:53.062 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:53.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:53.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:53.063 00:19:53.063 --- 10.0.0.3 ping statistics --- 00:19:53.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.063 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:53.063 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:53.063 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:19:53.063 00:19:53.063 --- 10.0.0.4 ping statistics --- 00:19:53.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.063 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:53.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:53.063 00:19:53.063 --- 10.0.0.1 ping statistics --- 00:19:53.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.063 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:53.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:19:53.063 00:19:53.063 --- 10.0.0.2 ping statistics --- 00:19:53.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.063 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=91034 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 91034 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 91034 ']' 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.063 16:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:53.063 [2024-11-26 16:25:18.581678] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:19:53.063 [2024-11-26 16:25:18.581776] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.322 [2024-11-26 16:25:18.729604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:53.322 [2024-11-26 16:25:18.749025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.322 [2024-11-26 16:25:18.749292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.322 [2024-11-26 16:25:18.749487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.322 [2024-11-26 16:25:18.749616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.322 [2024-11-26 16:25:18.749650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.322 [2024-11-26 16:25:18.750492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.322 [2024-11-26 16:25:18.750503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.322 [2024-11-26 16:25:18.778753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:53.888 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.888 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:53.888 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.888 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.888 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:54.147 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.147 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91034 00:19:54.147 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:54.407 [2024-11-26 16:25:19.855196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.407 16:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:54.666 Malloc0 00:19:54.667 16:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:54.925 16:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:54.925 16:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:55.493 [2024-11-26 16:25:20.846868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:55.493 16:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:55.493 [2024-11-26 16:25:21.067050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:55.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91088 00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91088 /var/tmp/bdevperf.sock 00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 91088 ']' 00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.493 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:55.752 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.752 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:55.752 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:56.318 16:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:56.577 Nvme0n1 00:19:56.577 16:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:56.835 Nvme0n1 00:19:56.835 16:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:56.835 16:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:59.368 16:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:59.368 16:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:59.368 16:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:59.368 16:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:00.304 16:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:00.304 16:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:00.305 16:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.305 16:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:00.564 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.564 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:00.564 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.564 16:25:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:00.823 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:00.823 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:00.823 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:00.823 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.083 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.083 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:01.083 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:01.083 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.342 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.342 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:01.342 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.342 16:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:01.600 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.600 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:01.600 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.600 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:01.859 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.859 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:01.859 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:02.117 16:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:02.390 16:25:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:03.765 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:03.765 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:03.765 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.765 16:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:03.765 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.765 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:03.765 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.765 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:04.023 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.023 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:04.023 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.023 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:04.281 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.281 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:04.281 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.281 16:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:04.540 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.540 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:04.540 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.540 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:04.799 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.799 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:04.799 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.799 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:05.057 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.057 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:05.057 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:05.317 16:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:05.576 16:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:06.513 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:06.513 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:06.513 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.513 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:06.773 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.773 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:06.773 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.773 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:07.032 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:07.032 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:07.032 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.032 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:07.291 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.291 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:07.291 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.291 16:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:07.550 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.550 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:07.550 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.550 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:07.809 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.809 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:07.809 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.809 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:08.067 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.067 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:08.067 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:08.326 16:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:08.585 16:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:09.554 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:09.554 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:09.554 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.554 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:09.861 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.861 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:09.861 16:25:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.861 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:10.123 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:10.123 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:10.123 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:10.123 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.382 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.382 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:10.382 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:10.382 16:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.642 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.642 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:10.642 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.642 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:10.901 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.901 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:10.901 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.901 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:11.159 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:11.159 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:11.159 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:11.419 16:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:11.678 16:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:13.056 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:13.057 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:13.057 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.057 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:13.057 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:13.057 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:13.057 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.057 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:13.316 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:13.316 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:13.316 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.316 16:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:13.575 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.575 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:13.575 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.575 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:13.833 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.833 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:13.833 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.833 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:20:14.091 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:14.091 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:14.091 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.091 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:14.350 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:14.350 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:14.350 16:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:14.608 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:14.867 16:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:15.804 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:15.804 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:15.804 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:15.804 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.372 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:16.372 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:16.372 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.372 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:16.372 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.372 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:16.372 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.372 16:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:20:16.631 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.631 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:16.631 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:16.631 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.199 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.199 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:17.199 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:17.199 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.199 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:17.199 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:17.199 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.199 16:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:17.457 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.457 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:17.716 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:17.716 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:17.974 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:18.232 16:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:19.609 16:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:19.609 16:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:19.609 16:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:20:19.609 16:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:19.609 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.609 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:19.609 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.609 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:19.868 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.868 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:19.868 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:19.868 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.127 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.127 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:20.127 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:20.127 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.386 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.386 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:20.386 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.386 16:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:20.645 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.645 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:20.645 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.645 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:20.905 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.905 
16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:20.905 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:21.164 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:21.422 16:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:22.359 16:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:22.359 16:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:22.359 16:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.359 16:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:22.618 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:22.618 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:22.618 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.618 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:22.877 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.877 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:22.877 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:22.877 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.137 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.137 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:23.137 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:23.137 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.395 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.395 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:23.395 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.395 16:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:23.664 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.664 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:23.664 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.664 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:23.924 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.925 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:23.925 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:24.184 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:24.443 16:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:25.380 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:25.380 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:25.380 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.380 16:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:25.638 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.638 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:25.638 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.638 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:25.897 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.897 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:25.897 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.897 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:26.156 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.156 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:26.156 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.156 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:26.415 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.415 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:26.415 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.415 16:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:26.674 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.674 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:26.674 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.674 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:26.933 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.933 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:26.933 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:27.191 16:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:27.450 16:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:28.840 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:28.840 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:28.840 16:25:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.840 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:28.840 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.840 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:28.840 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:28.840 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:29.099 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:29.099 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:29.099 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:29.099 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.357 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.358 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:29.358 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:29.358 16:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.616 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.616 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:29.616 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.616 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:29.875 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.875 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:29.875 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.875 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91088 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 91088 ']' 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 91088 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91088 00:20:30.134 killing process with pid 91088 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91088' 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 91088 00:20:30.134 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 91088 00:20:30.134 { 00:20:30.134 "results": [ 00:20:30.134 { 00:20:30.134 "job": "Nvme0n1", 00:20:30.134 "core_mask": "0x4", 00:20:30.134 "workload": "verify", 00:20:30.134 "status": "terminated", 00:20:30.134 "verify_range": { 00:20:30.134 "start": 0, 00:20:30.134 "length": 16384 00:20:30.134 }, 00:20:30.134 "queue_depth": 128, 00:20:30.134 "io_size": 4096, 00:20:30.134 "runtime": 33.275121, 00:20:30.134 "iops": 9583.285963107392, 00:20:30.134 "mibps": 37.43471079338825, 00:20:30.134 "io_failed": 0, 00:20:30.134 "io_timeout": 0, 00:20:30.134 "avg_latency_us": 13329.5140874667, 00:20:30.134 "min_latency_us": 621.8472727272728, 00:20:30.134 "max_latency_us": 4026531.84 00:20:30.134 } 00:20:30.134 ], 00:20:30.134 "core_count": 1 00:20:30.134 } 00:20:30.395 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91088 00:20:30.395 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:30.395 [2024-11-26 16:25:21.139266] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:20:30.395 [2024-11-26 16:25:21.139385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91088 ] 00:20:30.395 [2024-11-26 16:25:21.292188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.395 [2024-11-26 16:25:21.315567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.395 [2024-11-26 16:25:21.348082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:30.395 Running I/O for 90 seconds... 
00:20:30.395 7972.00 IOPS, 31.14 MiB/s [2024-11-26T16:25:56.048Z] 8074.00 IOPS, 31.54 MiB/s [2024-11-26T16:25:56.048Z] 8554.67 IOPS, 33.42 MiB/s [2024-11-26T16:25:56.048Z] 9048.00 IOPS, 35.34 MiB/s [2024-11-26T16:25:56.048Z] 9325.60 IOPS, 36.43 MiB/s [2024-11-26T16:25:56.048Z] 9529.33 IOPS, 37.22 MiB/s [2024-11-26T16:25:56.048Z] 9660.57 IOPS, 37.74 MiB/s [2024-11-26T16:25:56.048Z] 9735.88 IOPS, 38.03 MiB/s [2024-11-26T16:25:56.048Z] 9834.33 IOPS, 38.42 MiB/s [2024-11-26T16:25:56.048Z] 9917.20 IOPS, 38.74 MiB/s [2024-11-26T16:25:56.048Z] 9966.91 IOPS, 38.93 MiB/s [2024-11-26T16:25:56.048Z] 10027.00 IOPS, 39.17 MiB/s [2024-11-26T16:25:56.048Z] 10068.00 IOPS, 39.33 MiB/s [2024-11-26T16:25:56.048Z] 10096.79 IOPS, 39.44 MiB/s [2024-11-26T16:25:56.048Z] [2024-11-26 16:25:36.979081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-11-26 16:25:36.979140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:30.395 [2024-11-26 16:25:36.979206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-11-26 16:25:36.979225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:30.395 [2024-11-26 16:25:36.979246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-11-26 16:25:36.979260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:30.395 [2024-11-26 16:25:36.979279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.395 [2024-11-26 16:25:36.979292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.979325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.979385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.979422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.979455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.979488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.979549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.979583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.979615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.979648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.979680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.979713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.979761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.979813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 
16:25:36.979850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.979882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.979914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.979946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.979976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.979991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:123784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123824 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.396 [2024-11-26 16:25:36.980318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:30.396 [2024-11-26 16:25:36.980702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.396 [2024-11-26 16:25:36.980716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.980778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.980792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.980812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.980827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.980847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.980861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.980881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.980903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.980925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.980939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 
p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.980960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.980975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.981612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.981646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.981679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.981711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.981743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.981775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.981817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.397 [2024-11-26 16:25:36.981849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.981979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.981998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124024 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.982011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.982031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.982044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.982063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.982076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.982095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.982110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:30.397 [2024-11-26 16:25:36.982141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.397 [2024-11-26 16:25:36.982160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982366] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.982544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 
16:25:36.982702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.982971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.982984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.983017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.983057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.398 [2024-11-26 16:25:36.983726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.983772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.983811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.983850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.983889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.983927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.983965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.983991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.984005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.984074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.984095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.984126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.984142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.984167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.984181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.984206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.984231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:30.398 [2024-11-26 16:25:36.984260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.398 [2024-11-26 16:25:36.984274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:36.984316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:36.984330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:36.984399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:36.984416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:36.984445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:36.984461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:36.984488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:36.984503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:36.984530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:36.984545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:36.984572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:36.984587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:36.984614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:36.984628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:36.984656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:36.984672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:30.399 9765.00 IOPS, 38.14 MiB/s [2024-11-26T16:25:56.052Z] 9154.69 IOPS, 35.76 MiB/s [2024-11-26T16:25:56.052Z] 8616.18 IOPS, 33.66 MiB/s [2024-11-26T16:25:56.052Z] 8137.50 IOPS, 31.79 MiB/s [2024-11-26T16:25:56.052Z] 7990.53 IOPS, 31.21 MiB/s [2024-11-26T16:25:56.052Z] 8112.20 IOPS, 31.69 MiB/s [2024-11-26T16:25:56.052Z] 8224.95 IOPS, 32.13 MiB/s [2024-11-26T16:25:56.052Z] 8467.64 IOPS, 33.08 MiB/s [2024-11-26T16:25:56.052Z] 8630.35 IOPS, 33.71 MiB/s [2024-11-26T16:25:56.052Z] 8850.54 IOPS, 34.57 MiB/s [2024-11-26T16:25:56.052Z] 8937.96 IOPS, 34.91 MiB/s [2024-11-26T16:25:56.052Z] 8991.12 IOPS, 35.12 MiB/s [2024-11-26T16:25:56.052Z] 9039.15 IOPS, 35.31 MiB/s [2024-11-26T16:25:56.052Z] 9172.07 IOPS, 35.83 MiB/s [2024-11-26T16:25:56.052Z] 9274.34 IOPS, 36.23 MiB/s [2024-11-26T16:25:56.052Z] 9418.53 IOPS, 36.79 MiB/s [2024-11-26T16:25:56.052Z] [2024-11-26 16:25:53.036743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.036821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.036896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-11-26 16:25:53.036918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.036942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-11-26 16:25:53.036957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.036979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.037009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.037044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.037058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 
sqhd:0046 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.037078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.037091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.037112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.037139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.037158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-11-26 16:25:53.037171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.037190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.037203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.037222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.037235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.037254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.037267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.037286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.037300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.039554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.039591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.039620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.399 [2024-11-26 16:25:53.039648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.039670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-11-26 16:25:53.039700] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.039731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-11-26 16:25:53.039745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.039765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-11-26 16:25:53.039778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.039798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-11-26 16:25:53.039811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.039831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-11-26 16:25:53.039844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:30.399 [2024-11-26 16:25:53.039864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.399 [2024-11-26 16:25:53.039878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:30.399 9515.77 IOPS, 37.17 MiB/s [2024-11-26T16:25:56.052Z] 9551.41 IOPS, 37.31 MiB/s [2024-11-26T16:25:56.052Z] 9579.55 IOPS, 37.42 MiB/s [2024-11-26T16:25:56.052Z] Received shutdown signal, test time was about 33.275834 seconds 00:20:30.399 00:20:30.399 Latency(us) 00:20:30.399 [2024-11-26T16:25:56.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.399 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:30.399 Verification LBA range: start 0x0 length 0x4000 00:20:30.399 Nvme0n1 : 33.28 9583.29 37.43 0.00 0.00 13329.51 621.85 4026531.84 00:20:30.399 [2024-11-26T16:25:56.052Z] =================================================================================================================== 00:20:30.399 [2024-11-26T16:25:56.052Z] Total : 9583.29 37.43 0.00 0.00 13329.51 621.85 4026531.84 00:20:30.399 16:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # sync 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:30.659 rmmod nvme_tcp 00:20:30.659 rmmod nvme_fabrics 00:20:30.659 rmmod nvme_keyring 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 91034 ']' 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 91034 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 91034 ']' 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 91034 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91034 00:20:30.659 killing process with pid 91034 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91034' 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 91034 00:20:30.659 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 91034 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # 
ip link set nvmf_init_br nomaster 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.918 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.176 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:31.176 ************************************ 00:20:31.176 END TEST nvmf_host_multipath_status 00:20:31.176 ************************************ 00:20:31.176 00:20:31.176 real 0m38.658s 00:20:31.176 user 2m5.400s 00:20:31.176 sys 0m10.676s 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.177 ************************************ 00:20:31.177 START TEST nvmf_discovery_remove_ifc 00:20:31.177 ************************************ 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:31.177 * Looking for test storage... 
00:20:31.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:31.177 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:31.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.436 --rc genhtml_branch_coverage=1 00:20:31.436 --rc genhtml_function_coverage=1 00:20:31.436 --rc genhtml_legend=1 00:20:31.436 --rc geninfo_all_blocks=1 00:20:31.436 --rc geninfo_unexecuted_blocks=1 00:20:31.436 00:20:31.436 ' 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:31.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.436 --rc genhtml_branch_coverage=1 00:20:31.436 --rc genhtml_function_coverage=1 00:20:31.436 --rc genhtml_legend=1 00:20:31.436 --rc geninfo_all_blocks=1 00:20:31.436 --rc geninfo_unexecuted_blocks=1 00:20:31.436 00:20:31.436 ' 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:31.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.436 --rc genhtml_branch_coverage=1 00:20:31.436 --rc genhtml_function_coverage=1 00:20:31.436 --rc genhtml_legend=1 00:20:31.436 --rc geninfo_all_blocks=1 00:20:31.436 --rc geninfo_unexecuted_blocks=1 00:20:31.436 00:20:31.436 ' 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:31.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.436 --rc genhtml_branch_coverage=1 00:20:31.436 --rc genhtml_function_coverage=1 00:20:31.436 --rc genhtml_legend=1 00:20:31.436 --rc geninfo_all_blocks=1 00:20:31.436 --rc geninfo_unexecuted_blocks=1 00:20:31.436 00:20:31.436 ' 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:31.436 16:25:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.436 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:31.437 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:31.437 16:25:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:31.437 Cannot find device "nvmf_init_br" 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:31.437 Cannot find device "nvmf_init_br2" 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:31.437 Cannot find device "nvmf_tgt_br" 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:31.437 Cannot find device "nvmf_tgt_br2" 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:31.437 Cannot find device "nvmf_init_br" 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:31.437 Cannot find device "nvmf_init_br2" 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:31.437 Cannot find device "nvmf_tgt_br" 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:31.437 Cannot find device "nvmf_tgt_br2" 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:31.437 16:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:31.437 Cannot find device "nvmf_br" 00:20:31.437 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:31.437 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:31.437 Cannot find device "nvmf_init_if" 00:20:31.437 16:25:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:31.437 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:31.437 Cannot find device "nvmf_init_if2" 00:20:31.437 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:31.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:31.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:31.438 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:31.697 16:25:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:31.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:31.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:20:31.697 00:20:31.697 --- 10.0.0.3 ping statistics --- 00:20:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.697 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:31.697 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:31.697 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:20:31.697 00:20:31.697 --- 10.0.0.4 ping statistics --- 00:20:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.697 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:31.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:31.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:31.697 00:20:31.697 --- 10.0.0.1 ping statistics --- 00:20:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.697 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:31.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:20:31.697 00:20:31.697 --- 10.0.0.2 ping statistics --- 00:20:31.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.697 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91909 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 91909 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91909 ']' 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
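[editor's note] The trace above brings the veth/netns topology up, verifies it with the four pings, and then launches the target application inside nvmf_tgt_ns_spdk before waiting on its RPC socket. A minimal stand-alone sketch of that launch-and-wait step, reusing the binary path and flags shown in the log; the polling loop is an illustrative assumption, the test's own waitforlisten helper is not expanded here:

  # start the target inside the test namespace and remember its pid
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # assumed readiness check: poll the default RPC socket until it answers
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done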
00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.697 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.697 [2024-11-26 16:25:57.335853] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:20:31.697 [2024-11-26 16:25:57.335940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.955 [2024-11-26 16:25:57.488866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.955 [2024-11-26 16:25:57.513064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.955 [2024-11-26 16:25:57.513147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.955 [2024-11-26 16:25:57.513164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.955 [2024-11-26 16:25:57.513174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.955 [2024-11-26 16:25:57.513183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.955 [2024-11-26 16:25:57.513571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.955 [2024-11-26 16:25:57.547735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:31.955 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.955 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:31.955 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.955 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.955 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.230 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.230 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:32.230 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.230 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.231 [2024-11-26 16:25:57.646339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.231 [2024-11-26 16:25:57.654513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:32.231 null0 00:20:32.231 [2024-11-26 16:25:57.686423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:32.231 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
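[editor's note] By the end of this block the target reports a TCP transport, a null0 bdev, a data listener on 10.0.0.3:4420 and a discovery listener on port 8009. The rpc_cmd block that produces this (discovery_remove_ifc.sh@43) is not expanded in the trace; a rough, hypothetical equivalent driven through rpc.py against the target's default RPC socket could look like:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp                  # "*** TCP Transport Init ***"
  $rpc bdev_null_create null0 1000 512               # backing bdev for the namespace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009

The exact commands, sizes and serial used by the script are assumptions; only the resulting listeners and the null0 bdev are confirmed by the notices above.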
00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91928 00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91928 /tmp/host.sock 00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91928 ']' 00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.231 16:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.231 [2024-11-26 16:25:57.766904] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:20:32.231 [2024-11-26 16:25:57.767387] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91928 ] 00:20:32.500 [2024-11-26 16:25:57.920145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.500 [2024-11-26 16:25:57.945088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.500 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.500 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:32.500 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:32.500 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:32.500 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.500 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.500 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:32.501 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.501 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.501 [2024-11-26 16:25:58.070836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:32.501 16:25:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.501 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:32.501 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.501 16:25:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.877 [2024-11-26 16:25:59.109155] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:33.877 [2024-11-26 16:25:59.109178] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:33.877 [2024-11-26 16:25:59.109194] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:33.877 [2024-11-26 16:25:59.115189] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:33.877 [2024-11-26 16:25:59.169575] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:33.877 [2024-11-26 16:25:59.170273] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x115f3f0:1 started. 00:20:33.877 [2024-11-26 16:25:59.171958] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:33.877 [2024-11-26 16:25:59.172024] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:33.877 [2024-11-26 16:25:59.172047] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:33.877 [2024-11-26 16:25:59.172078] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:33.877 [2024-11-26 16:25:59.172098] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.877 [2024-11-26 16:25:59.178045] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x115f3f0 was disconnected and freed. delete nvme_qpair. 
00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:33.877 16:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:34.814 16:26:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:35.749 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:35.750 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.750 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:35.750 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:35.750 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:35.750 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.750 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.750 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.008 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:36.008 16:26:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:36.944 16:26:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:37.879 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:37.879 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.879 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:37.879 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.879 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:37.879 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.879 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:37.879 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:38.138 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:38.138 16:26:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:39.089 16:26:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:39.089 [2024-11-26 16:26:04.610300] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:39.089 [2024-11-26 16:26:04.610408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.089 [2024-11-26 16:26:04.610425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.089 [2024-11-26 16:26:04.610437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.089 [2024-11-26 16:26:04.610446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.089 [2024-11-26 16:26:04.610455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.089 [2024-11-26 16:26:04.610464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.089 [2024-11-26 16:26:04.610473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.089 [2024-11-26 16:26:04.610481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.089 [2024-11-26 16:26:04.610506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.089 [2024-11-26 16:26:04.610515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.089 [2024-11-26 16:26:04.610523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113a910 is same with the state(6) to be set 
00:20:39.089 [2024-11-26 16:26:04.620297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113a910 (9): Bad file descriptor 00:20:39.089 [2024-11-26 16:26:04.630313] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:39.089 [2024-11-26 16:26:04.630332] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:39.090 [2024-11-26 16:26:04.630338] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:39.090 [2024-11-26 16:26:04.630355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:39.090 [2024-11-26 16:26:04.630399] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:40.026 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:40.026 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:40.026 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.026 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:40.027 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:40.027 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:40.027 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:40.285 [2024-11-26 16:26:05.686412] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:40.285 [2024-11-26 16:26:05.686476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x113a910 with addr=10.0.0.3, port=4420 00:20:40.286 [2024-11-26 16:26:05.686492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113a910 is same with the state(6) to be set 00:20:40.286 [2024-11-26 16:26:05.686522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113a910 (9): Bad file descriptor 00:20:40.286 [2024-11-26 16:26:05.686853] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:20:40.286 [2024-11-26 16:26:05.686883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:40.286 [2024-11-26 16:26:05.686893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:40.286 [2024-11-26 16:26:05.686902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:40.286 [2024-11-26 16:26:05.686909] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:40.286 [2024-11-26 16:26:05.686914] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:40.286 [2024-11-26 16:26:05.686919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:20:40.286 [2024-11-26 16:26:05.686928] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:40.286 [2024-11-26 16:26:05.686932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:40.286 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.286 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:40.286 16:26:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:41.224 [2024-11-26 16:26:06.686955] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:41.224 [2024-11-26 16:26:06.686981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:41.224 [2024-11-26 16:26:06.687003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:41.224 [2024-11-26 16:26:06.687027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:41.224 [2024-11-26 16:26:06.687035] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:20:41.224 [2024-11-26 16:26:06.687042] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:41.224 [2024-11-26 16:26:06.687048] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:41.224 [2024-11-26 16:26:06.687053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:20:41.224 [2024-11-26 16:26:06.687080] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:41.224 [2024-11-26 16:26:06.687111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.224 [2024-11-26 16:26:06.687125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.224 [2024-11-26 16:26:06.687136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.224 [2024-11-26 16:26:06.687143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.224 [2024-11-26 16:26:06.687151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.224 [2024-11-26 16:26:06.687159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.224 [2024-11-26 16:26:06.687167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.224 [2024-11-26 16:26:06.687174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.224 [2024-11-26 16:26:06.687182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.224 [2024-11-26 16:26:06.687190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.224 [2024-11-26 16:26:06.687198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:20:41.224 [2024-11-26 16:26:06.687457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128e50 (9): Bad file descriptor 00:20:41.224 [2024-11-26 16:26:06.688468] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:41.224 [2024-11-26 16:26:06.688661] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:41.224 16:26:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:42.602 16:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:42.602 16:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:42.602 16:26:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:42.602 16:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.602 16:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:42.602 16:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:42.602 16:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:42.602 16:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.602 16:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:42.602 16:26:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:43.170 [2024-11-26 16:26:08.693147] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:43.170 [2024-11-26 16:26:08.693320] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:43.170 [2024-11-26 16:26:08.693352] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:43.170 [2024-11-26 16:26:08.699178] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:43.170 [2024-11-26 16:26:08.753589] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:20:43.170 [2024-11-26 16:26:08.754363] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1114690:1 started. 00:20:43.170 [2024-11-26 16:26:08.755528] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:43.170 [2024-11-26 16:26:08.755705] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:43.170 [2024-11-26 16:26:08.755764] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:43.170 [2024-11-26 16:26:08.755865] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:43.170 [2024-11-26 16:26:08.755921] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:43.170 [2024-11-26 16:26:08.761843] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1114690 was disconnected and freed. delete nvme_qpair. 
00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91928 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91928 ']' 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91928 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91928 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91928' 00:20:43.430 killing process with pid 91928 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91928 00:20:43.430 16:26:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91928 00:20:43.707 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:43.707 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.707 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:43.707 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.707 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:43.707 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.707 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.707 rmmod nvme_tcp 00:20:43.708 rmmod nvme_fabrics 00:20:43.708 rmmod nvme_keyring 00:20:43.708 16:26:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91909 ']' 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91909 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91909 ']' 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91909 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91909 00:20:43.708 killing process with pid 91909 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91909' 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91909 00:20:43.708 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91909 00:20:43.971 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:43.971 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:43.971 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.972 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:44.231 ************************************ 00:20:44.231 END TEST nvmf_discovery_remove_ifc 00:20:44.231 ************************************ 00:20:44.231 00:20:44.231 real 0m12.995s 00:20:44.231 user 0m22.111s 00:20:44.231 sys 0m2.394s 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.231 ************************************ 00:20:44.231 START TEST nvmf_identify_kernel_target 00:20:44.231 ************************************ 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:44.231 * Looking for test storage... 
00:20:44.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:44.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.231 --rc genhtml_branch_coverage=1 00:20:44.231 --rc genhtml_function_coverage=1 00:20:44.231 --rc genhtml_legend=1 00:20:44.231 --rc geninfo_all_blocks=1 00:20:44.231 --rc geninfo_unexecuted_blocks=1 00:20:44.231 00:20:44.231 ' 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:44.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.231 --rc genhtml_branch_coverage=1 00:20:44.231 --rc genhtml_function_coverage=1 00:20:44.231 --rc genhtml_legend=1 00:20:44.231 --rc geninfo_all_blocks=1 00:20:44.231 --rc geninfo_unexecuted_blocks=1 00:20:44.231 00:20:44.231 ' 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:44.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.231 --rc genhtml_branch_coverage=1 00:20:44.231 --rc genhtml_function_coverage=1 00:20:44.231 --rc genhtml_legend=1 00:20:44.231 --rc geninfo_all_blocks=1 00:20:44.231 --rc geninfo_unexecuted_blocks=1 00:20:44.231 00:20:44.231 ' 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:44.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.231 --rc genhtml_branch_coverage=1 00:20:44.231 --rc genhtml_function_coverage=1 00:20:44.231 --rc genhtml_legend=1 00:20:44.231 --rc geninfo_all_blocks=1 00:20:44.231 --rc geninfo_unexecuted_blocks=1 00:20:44.231 00:20:44.231 ' 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.231 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:44.232 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:44.232 16:26:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.232 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:44.491 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:44.492 16:26:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:44.492 Cannot find device "nvmf_init_br" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:44.492 Cannot find device "nvmf_init_br2" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:44.492 Cannot find device "nvmf_tgt_br" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:44.492 Cannot find device "nvmf_tgt_br2" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:44.492 Cannot find device "nvmf_init_br" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:44.492 Cannot find device "nvmf_init_br2" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:44.492 Cannot find device "nvmf_tgt_br" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:44.492 Cannot find device "nvmf_tgt_br2" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:44.492 Cannot find device "nvmf_br" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:44.492 Cannot find device "nvmf_init_if" 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:44.492 16:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:44.492 Cannot find device "nvmf_init_if2" 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:44.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.492 16:26:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:44.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:44.492 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:44.751 16:26:10 
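Annotation: with the stale state cleared, the script builds the test topology: a fresh nvmf_tgt_ns_spdk namespace, one veth pair per interface (the `*_if` end carries traffic, the `*_br` end is destined for a bridge), the target-side interfaces moved into the namespace with 10.0.0.3/10.0.0.4, and a bridge that the host-side peers are enslaved to on the trace lines just below. Condensed to a single initiator/target pair, with all commands taken from the trace (the second pair is analogous):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # the "master nvmf_br" assignments that follow tie the host-side peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
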
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:44.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:44.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:20:44.751 00:20:44.751 --- 10.0.0.3 ping statistics --- 00:20:44.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.751 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:44.751 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:44.751 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:20:44.751 00:20:44.751 --- 10.0.0.4 ping statistics --- 00:20:44.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.751 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:44.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:44.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:44.751 00:20:44.751 --- 10.0.0.1 ping statistics --- 00:20:44.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.751 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:44.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
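Annotation: two details of this step are easy to miss in the raw trace. First, the ipts helper tags every rule it inserts with an "SPDK_NVMF:" comment; that is what lets the teardown near the end of the test (iptr, i.e. `iptables-save | grep -v SPDK_NVMF | iptables-restore`) strip exactly these rules and nothing else. Second, connectivity is verified in both directions with single pings before any NVMe traffic is attempted. A sketch built from the commands in the trace:

    # open the NVMe-oF TCP port, tagging each rule for later removal
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    # sanity-check the topology in both directions
    ping -c 1 10.0.0.3                                  # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
    # at teardown time, only the tagged rules are dropped
    iptables-save | grep -v SPDK_NVMF | iptables-restore
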
00:20:44.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:20:44.751 00:20:44.751 --- 10.0.0.2 ping statistics --- 00:20:44.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.751 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:44.751 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:45.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.009 Waiting for block devices as requested 00:20:45.268 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.268 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.268 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:45.268 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:45.268 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:45.268 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:45.268 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:45.268 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:45.268 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:45.268 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:45.268 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:45.527 No valid GPT data, bailing 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:45.527 16:26:10 
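Annotation: the block that follows is configure_kernel_target. The script scans /sys/block/nvme* for a namespace that is neither zoned nor already carrying a partition table (spdk-gpt.py printing "No valid GPT data, bailing" means the device is free), then wires that device into a kernel NVMe-oF target through the configfs paths assigned to kernel_subsystem, kernel_namespace and kernel_port above. Because xtrace hides the redirection targets of the echo commands, the attribute file names in this sketch are assumptions based on the usual nvmet configfs layout rather than something visible in the log; the values and the overall order are the ones traced below.

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1
    mkdir "$subsys" "$ns" "$port"
    echo 1            > "$subsys/attr_allow_any_host"   # assumed attribute file
    echo /dev/nvme1n1 > "$ns/device_path"               # first unused namespace found by the scan
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                 # expose the subsystem on the port
    # verification, simplified (the trace also passes --hostnqn/--hostid):
    nvme discover -t tcp -a 10.0.0.1 -s 4420
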
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:45.527 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:45.528 16:26:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:45.528 No valid GPT data, bailing 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:45.528 No valid GPT data, bailing 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:45.528 No valid GPT data, bailing 00:20:45.528 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -a 10.0.0.1 -t tcp -s 4420 00:20:45.788 00:20:45.788 Discovery Log Number of Records 2, Generation counter 2 00:20:45.788 =====Discovery Log Entry 0====== 00:20:45.788 trtype: tcp 00:20:45.788 adrfam: ipv4 00:20:45.788 subtype: current discovery subsystem 00:20:45.788 treq: not specified, sq flow control disable supported 00:20:45.788 portid: 1 00:20:45.788 trsvcid: 4420 00:20:45.788 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:45.788 traddr: 10.0.0.1 00:20:45.788 eflags: none 00:20:45.788 sectype: none 00:20:45.788 =====Discovery Log Entry 1====== 00:20:45.788 trtype: tcp 00:20:45.788 adrfam: ipv4 00:20:45.788 subtype: nvme subsystem 00:20:45.788 treq: not 
specified, sq flow control disable supported 00:20:45.788 portid: 1 00:20:45.788 trsvcid: 4420 00:20:45.788 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:45.788 traddr: 10.0.0.1 00:20:45.788 eflags: none 00:20:45.788 sectype: none 00:20:45.788 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:45.788 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:45.788 ===================================================== 00:20:45.788 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:45.788 ===================================================== 00:20:45.788 Controller Capabilities/Features 00:20:45.788 ================================ 00:20:45.788 Vendor ID: 0000 00:20:45.788 Subsystem Vendor ID: 0000 00:20:45.788 Serial Number: 9567367151d818d75d07 00:20:45.788 Model Number: Linux 00:20:45.788 Firmware Version: 6.8.9-20 00:20:45.788 Recommended Arb Burst: 0 00:20:45.788 IEEE OUI Identifier: 00 00 00 00:20:45.788 Multi-path I/O 00:20:45.788 May have multiple subsystem ports: No 00:20:45.788 May have multiple controllers: No 00:20:45.788 Associated with SR-IOV VF: No 00:20:45.788 Max Data Transfer Size: Unlimited 00:20:45.788 Max Number of Namespaces: 0 00:20:45.788 Max Number of I/O Queues: 1024 00:20:45.788 NVMe Specification Version (VS): 1.3 00:20:45.788 NVMe Specification Version (Identify): 1.3 00:20:45.788 Maximum Queue Entries: 1024 00:20:45.788 Contiguous Queues Required: No 00:20:45.788 Arbitration Mechanisms Supported 00:20:45.788 Weighted Round Robin: Not Supported 00:20:45.788 Vendor Specific: Not Supported 00:20:45.788 Reset Timeout: 7500 ms 00:20:45.788 Doorbell Stride: 4 bytes 00:20:45.788 NVM Subsystem Reset: Not Supported 00:20:45.788 Command Sets Supported 00:20:45.788 NVM Command Set: Supported 00:20:45.788 Boot Partition: Not Supported 00:20:45.788 Memory Page Size Minimum: 4096 bytes 00:20:45.788 Memory Page Size Maximum: 4096 bytes 00:20:45.788 Persistent Memory Region: Not Supported 00:20:45.788 Optional Asynchronous Events Supported 00:20:45.788 Namespace Attribute Notices: Not Supported 00:20:45.788 Firmware Activation Notices: Not Supported 00:20:45.788 ANA Change Notices: Not Supported 00:20:45.788 PLE Aggregate Log Change Notices: Not Supported 00:20:45.788 LBA Status Info Alert Notices: Not Supported 00:20:45.788 EGE Aggregate Log Change Notices: Not Supported 00:20:45.788 Normal NVM Subsystem Shutdown event: Not Supported 00:20:45.788 Zone Descriptor Change Notices: Not Supported 00:20:45.788 Discovery Log Change Notices: Supported 00:20:45.788 Controller Attributes 00:20:45.788 128-bit Host Identifier: Not Supported 00:20:45.788 Non-Operational Permissive Mode: Not Supported 00:20:45.788 NVM Sets: Not Supported 00:20:45.788 Read Recovery Levels: Not Supported 00:20:45.788 Endurance Groups: Not Supported 00:20:45.788 Predictable Latency Mode: Not Supported 00:20:45.788 Traffic Based Keep ALive: Not Supported 00:20:45.788 Namespace Granularity: Not Supported 00:20:45.788 SQ Associations: Not Supported 00:20:45.788 UUID List: Not Supported 00:20:45.788 Multi-Domain Subsystem: Not Supported 00:20:45.788 Fixed Capacity Management: Not Supported 00:20:45.788 Variable Capacity Management: Not Supported 00:20:45.788 Delete Endurance Group: Not Supported 00:20:45.788 Delete NVM Set: Not Supported 00:20:45.788 Extended LBA Formats Supported: Not Supported 00:20:45.788 Flexible Data 
Placement Supported: Not Supported 00:20:45.788 00:20:45.788 Controller Memory Buffer Support 00:20:45.788 ================================ 00:20:45.788 Supported: No 00:20:45.788 00:20:45.788 Persistent Memory Region Support 00:20:45.788 ================================ 00:20:45.788 Supported: No 00:20:45.788 00:20:45.788 Admin Command Set Attributes 00:20:45.788 ============================ 00:20:45.788 Security Send/Receive: Not Supported 00:20:45.788 Format NVM: Not Supported 00:20:45.788 Firmware Activate/Download: Not Supported 00:20:45.788 Namespace Management: Not Supported 00:20:45.788 Device Self-Test: Not Supported 00:20:45.788 Directives: Not Supported 00:20:45.788 NVMe-MI: Not Supported 00:20:45.788 Virtualization Management: Not Supported 00:20:45.788 Doorbell Buffer Config: Not Supported 00:20:45.788 Get LBA Status Capability: Not Supported 00:20:45.788 Command & Feature Lockdown Capability: Not Supported 00:20:45.788 Abort Command Limit: 1 00:20:45.788 Async Event Request Limit: 1 00:20:45.788 Number of Firmware Slots: N/A 00:20:45.788 Firmware Slot 1 Read-Only: N/A 00:20:45.788 Firmware Activation Without Reset: N/A 00:20:45.788 Multiple Update Detection Support: N/A 00:20:45.788 Firmware Update Granularity: No Information Provided 00:20:45.788 Per-Namespace SMART Log: No 00:20:45.788 Asymmetric Namespace Access Log Page: Not Supported 00:20:45.788 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:45.788 Command Effects Log Page: Not Supported 00:20:45.788 Get Log Page Extended Data: Supported 00:20:45.788 Telemetry Log Pages: Not Supported 00:20:45.788 Persistent Event Log Pages: Not Supported 00:20:45.788 Supported Log Pages Log Page: May Support 00:20:45.788 Commands Supported & Effects Log Page: Not Supported 00:20:45.788 Feature Identifiers & Effects Log Page:May Support 00:20:45.788 NVMe-MI Commands & Effects Log Page: May Support 00:20:45.788 Data Area 4 for Telemetry Log: Not Supported 00:20:45.788 Error Log Page Entries Supported: 1 00:20:45.788 Keep Alive: Not Supported 00:20:45.788 00:20:45.788 NVM Command Set Attributes 00:20:45.788 ========================== 00:20:45.788 Submission Queue Entry Size 00:20:45.788 Max: 1 00:20:45.788 Min: 1 00:20:45.788 Completion Queue Entry Size 00:20:45.788 Max: 1 00:20:45.788 Min: 1 00:20:45.788 Number of Namespaces: 0 00:20:45.788 Compare Command: Not Supported 00:20:45.788 Write Uncorrectable Command: Not Supported 00:20:45.788 Dataset Management Command: Not Supported 00:20:45.788 Write Zeroes Command: Not Supported 00:20:45.788 Set Features Save Field: Not Supported 00:20:45.788 Reservations: Not Supported 00:20:45.788 Timestamp: Not Supported 00:20:45.788 Copy: Not Supported 00:20:45.788 Volatile Write Cache: Not Present 00:20:45.788 Atomic Write Unit (Normal): 1 00:20:45.788 Atomic Write Unit (PFail): 1 00:20:45.788 Atomic Compare & Write Unit: 1 00:20:45.788 Fused Compare & Write: Not Supported 00:20:45.789 Scatter-Gather List 00:20:45.789 SGL Command Set: Supported 00:20:45.789 SGL Keyed: Not Supported 00:20:45.789 SGL Bit Bucket Descriptor: Not Supported 00:20:45.789 SGL Metadata Pointer: Not Supported 00:20:45.789 Oversized SGL: Not Supported 00:20:45.789 SGL Metadata Address: Not Supported 00:20:45.789 SGL Offset: Supported 00:20:45.789 Transport SGL Data Block: Not Supported 00:20:45.789 Replay Protected Memory Block: Not Supported 00:20:45.789 00:20:45.789 Firmware Slot Information 00:20:45.789 ========================= 00:20:45.789 Active slot: 0 00:20:45.789 00:20:45.789 00:20:45.789 Error Log 
00:20:45.789 ========= 00:20:45.789 00:20:45.789 Active Namespaces 00:20:45.789 ================= 00:20:45.789 Discovery Log Page 00:20:45.789 ================== 00:20:45.789 Generation Counter: 2 00:20:45.789 Number of Records: 2 00:20:45.789 Record Format: 0 00:20:45.789 00:20:45.789 Discovery Log Entry 0 00:20:45.789 ---------------------- 00:20:45.789 Transport Type: 3 (TCP) 00:20:45.789 Address Family: 1 (IPv4) 00:20:45.789 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:45.789 Entry Flags: 00:20:45.789 Duplicate Returned Information: 0 00:20:45.789 Explicit Persistent Connection Support for Discovery: 0 00:20:45.789 Transport Requirements: 00:20:45.789 Secure Channel: Not Specified 00:20:45.789 Port ID: 1 (0x0001) 00:20:45.789 Controller ID: 65535 (0xffff) 00:20:45.789 Admin Max SQ Size: 32 00:20:45.789 Transport Service Identifier: 4420 00:20:45.789 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:45.789 Transport Address: 10.0.0.1 00:20:45.789 Discovery Log Entry 1 00:20:45.789 ---------------------- 00:20:45.789 Transport Type: 3 (TCP) 00:20:45.789 Address Family: 1 (IPv4) 00:20:45.789 Subsystem Type: 2 (NVM Subsystem) 00:20:45.789 Entry Flags: 00:20:45.789 Duplicate Returned Information: 0 00:20:45.789 Explicit Persistent Connection Support for Discovery: 0 00:20:45.789 Transport Requirements: 00:20:45.789 Secure Channel: Not Specified 00:20:45.789 Port ID: 1 (0x0001) 00:20:45.789 Controller ID: 65535 (0xffff) 00:20:45.789 Admin Max SQ Size: 32 00:20:45.789 Transport Service Identifier: 4420 00:20:45.789 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:45.789 Transport Address: 10.0.0.1 00:20:45.789 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:46.049 get_feature(0x01) failed 00:20:46.049 get_feature(0x02) failed 00:20:46.049 get_feature(0x04) failed 00:20:46.049 ===================================================== 00:20:46.049 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:46.049 ===================================================== 00:20:46.049 Controller Capabilities/Features 00:20:46.049 ================================ 00:20:46.049 Vendor ID: 0000 00:20:46.049 Subsystem Vendor ID: 0000 00:20:46.049 Serial Number: b6e0a883ec9c4a82e2bf 00:20:46.049 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:46.049 Firmware Version: 6.8.9-20 00:20:46.049 Recommended Arb Burst: 6 00:20:46.049 IEEE OUI Identifier: 00 00 00 00:20:46.049 Multi-path I/O 00:20:46.049 May have multiple subsystem ports: Yes 00:20:46.049 May have multiple controllers: Yes 00:20:46.049 Associated with SR-IOV VF: No 00:20:46.049 Max Data Transfer Size: Unlimited 00:20:46.049 Max Number of Namespaces: 1024 00:20:46.049 Max Number of I/O Queues: 128 00:20:46.049 NVMe Specification Version (VS): 1.3 00:20:46.049 NVMe Specification Version (Identify): 1.3 00:20:46.049 Maximum Queue Entries: 1024 00:20:46.049 Contiguous Queues Required: No 00:20:46.049 Arbitration Mechanisms Supported 00:20:46.049 Weighted Round Robin: Not Supported 00:20:46.049 Vendor Specific: Not Supported 00:20:46.049 Reset Timeout: 7500 ms 00:20:46.049 Doorbell Stride: 4 bytes 00:20:46.049 NVM Subsystem Reset: Not Supported 00:20:46.049 Command Sets Supported 00:20:46.049 NVM Command Set: Supported 00:20:46.049 Boot Partition: Not Supported 00:20:46.049 Memory 
Page Size Minimum: 4096 bytes 00:20:46.049 Memory Page Size Maximum: 4096 bytes 00:20:46.049 Persistent Memory Region: Not Supported 00:20:46.049 Optional Asynchronous Events Supported 00:20:46.049 Namespace Attribute Notices: Supported 00:20:46.049 Firmware Activation Notices: Not Supported 00:20:46.049 ANA Change Notices: Supported 00:20:46.049 PLE Aggregate Log Change Notices: Not Supported 00:20:46.049 LBA Status Info Alert Notices: Not Supported 00:20:46.049 EGE Aggregate Log Change Notices: Not Supported 00:20:46.049 Normal NVM Subsystem Shutdown event: Not Supported 00:20:46.049 Zone Descriptor Change Notices: Not Supported 00:20:46.049 Discovery Log Change Notices: Not Supported 00:20:46.049 Controller Attributes 00:20:46.049 128-bit Host Identifier: Supported 00:20:46.049 Non-Operational Permissive Mode: Not Supported 00:20:46.049 NVM Sets: Not Supported 00:20:46.049 Read Recovery Levels: Not Supported 00:20:46.049 Endurance Groups: Not Supported 00:20:46.049 Predictable Latency Mode: Not Supported 00:20:46.049 Traffic Based Keep ALive: Supported 00:20:46.049 Namespace Granularity: Not Supported 00:20:46.049 SQ Associations: Not Supported 00:20:46.049 UUID List: Not Supported 00:20:46.049 Multi-Domain Subsystem: Not Supported 00:20:46.049 Fixed Capacity Management: Not Supported 00:20:46.049 Variable Capacity Management: Not Supported 00:20:46.049 Delete Endurance Group: Not Supported 00:20:46.049 Delete NVM Set: Not Supported 00:20:46.049 Extended LBA Formats Supported: Not Supported 00:20:46.049 Flexible Data Placement Supported: Not Supported 00:20:46.049 00:20:46.049 Controller Memory Buffer Support 00:20:46.049 ================================ 00:20:46.049 Supported: No 00:20:46.049 00:20:46.049 Persistent Memory Region Support 00:20:46.049 ================================ 00:20:46.049 Supported: No 00:20:46.049 00:20:46.049 Admin Command Set Attributes 00:20:46.049 ============================ 00:20:46.049 Security Send/Receive: Not Supported 00:20:46.049 Format NVM: Not Supported 00:20:46.049 Firmware Activate/Download: Not Supported 00:20:46.049 Namespace Management: Not Supported 00:20:46.049 Device Self-Test: Not Supported 00:20:46.049 Directives: Not Supported 00:20:46.049 NVMe-MI: Not Supported 00:20:46.049 Virtualization Management: Not Supported 00:20:46.049 Doorbell Buffer Config: Not Supported 00:20:46.049 Get LBA Status Capability: Not Supported 00:20:46.049 Command & Feature Lockdown Capability: Not Supported 00:20:46.049 Abort Command Limit: 4 00:20:46.049 Async Event Request Limit: 4 00:20:46.049 Number of Firmware Slots: N/A 00:20:46.049 Firmware Slot 1 Read-Only: N/A 00:20:46.049 Firmware Activation Without Reset: N/A 00:20:46.049 Multiple Update Detection Support: N/A 00:20:46.049 Firmware Update Granularity: No Information Provided 00:20:46.049 Per-Namespace SMART Log: Yes 00:20:46.049 Asymmetric Namespace Access Log Page: Supported 00:20:46.049 ANA Transition Time : 10 sec 00:20:46.049 00:20:46.049 Asymmetric Namespace Access Capabilities 00:20:46.049 ANA Optimized State : Supported 00:20:46.049 ANA Non-Optimized State : Supported 00:20:46.049 ANA Inaccessible State : Supported 00:20:46.049 ANA Persistent Loss State : Supported 00:20:46.049 ANA Change State : Supported 00:20:46.049 ANAGRPID is not changed : No 00:20:46.049 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:46.049 00:20:46.049 ANA Group Identifier Maximum : 128 00:20:46.049 Number of ANA Group Identifiers : 128 00:20:46.049 Max Number of Allowed Namespaces : 1024 00:20:46.049 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:46.049 Command Effects Log Page: Supported 00:20:46.049 Get Log Page Extended Data: Supported 00:20:46.049 Telemetry Log Pages: Not Supported 00:20:46.049 Persistent Event Log Pages: Not Supported 00:20:46.049 Supported Log Pages Log Page: May Support 00:20:46.049 Commands Supported & Effects Log Page: Not Supported 00:20:46.049 Feature Identifiers & Effects Log Page:May Support 00:20:46.049 NVMe-MI Commands & Effects Log Page: May Support 00:20:46.049 Data Area 4 for Telemetry Log: Not Supported 00:20:46.049 Error Log Page Entries Supported: 128 00:20:46.049 Keep Alive: Supported 00:20:46.050 Keep Alive Granularity: 1000 ms 00:20:46.050 00:20:46.050 NVM Command Set Attributes 00:20:46.050 ========================== 00:20:46.050 Submission Queue Entry Size 00:20:46.050 Max: 64 00:20:46.050 Min: 64 00:20:46.050 Completion Queue Entry Size 00:20:46.050 Max: 16 00:20:46.050 Min: 16 00:20:46.050 Number of Namespaces: 1024 00:20:46.050 Compare Command: Not Supported 00:20:46.050 Write Uncorrectable Command: Not Supported 00:20:46.050 Dataset Management Command: Supported 00:20:46.050 Write Zeroes Command: Supported 00:20:46.050 Set Features Save Field: Not Supported 00:20:46.050 Reservations: Not Supported 00:20:46.050 Timestamp: Not Supported 00:20:46.050 Copy: Not Supported 00:20:46.050 Volatile Write Cache: Present 00:20:46.050 Atomic Write Unit (Normal): 1 00:20:46.050 Atomic Write Unit (PFail): 1 00:20:46.050 Atomic Compare & Write Unit: 1 00:20:46.050 Fused Compare & Write: Not Supported 00:20:46.050 Scatter-Gather List 00:20:46.050 SGL Command Set: Supported 00:20:46.050 SGL Keyed: Not Supported 00:20:46.050 SGL Bit Bucket Descriptor: Not Supported 00:20:46.050 SGL Metadata Pointer: Not Supported 00:20:46.050 Oversized SGL: Not Supported 00:20:46.050 SGL Metadata Address: Not Supported 00:20:46.050 SGL Offset: Supported 00:20:46.050 Transport SGL Data Block: Not Supported 00:20:46.050 Replay Protected Memory Block: Not Supported 00:20:46.050 00:20:46.050 Firmware Slot Information 00:20:46.050 ========================= 00:20:46.050 Active slot: 0 00:20:46.050 00:20:46.050 Asymmetric Namespace Access 00:20:46.050 =========================== 00:20:46.050 Change Count : 0 00:20:46.050 Number of ANA Group Descriptors : 1 00:20:46.050 ANA Group Descriptor : 0 00:20:46.050 ANA Group ID : 1 00:20:46.050 Number of NSID Values : 1 00:20:46.050 Change Count : 0 00:20:46.050 ANA State : 1 00:20:46.050 Namespace Identifier : 1 00:20:46.050 00:20:46.050 Commands Supported and Effects 00:20:46.050 ============================== 00:20:46.050 Admin Commands 00:20:46.050 -------------- 00:20:46.050 Get Log Page (02h): Supported 00:20:46.050 Identify (06h): Supported 00:20:46.050 Abort (08h): Supported 00:20:46.050 Set Features (09h): Supported 00:20:46.050 Get Features (0Ah): Supported 00:20:46.050 Asynchronous Event Request (0Ch): Supported 00:20:46.050 Keep Alive (18h): Supported 00:20:46.050 I/O Commands 00:20:46.050 ------------ 00:20:46.050 Flush (00h): Supported 00:20:46.050 Write (01h): Supported LBA-Change 00:20:46.050 Read (02h): Supported 00:20:46.050 Write Zeroes (08h): Supported LBA-Change 00:20:46.050 Dataset Management (09h): Supported 00:20:46.050 00:20:46.050 Error Log 00:20:46.050 ========= 00:20:46.050 Entry: 0 00:20:46.050 Error Count: 0x3 00:20:46.050 Submission Queue Id: 0x0 00:20:46.050 Command Id: 0x5 00:20:46.050 Phase Bit: 0 00:20:46.050 Status Code: 0x2 00:20:46.050 Status Code Type: 0x0 00:20:46.050 Do Not Retry: 1 00:20:46.050 Error 
Location: 0x28 00:20:46.050 LBA: 0x0 00:20:46.050 Namespace: 0x0 00:20:46.050 Vendor Log Page: 0x0 00:20:46.050 ----------- 00:20:46.050 Entry: 1 00:20:46.050 Error Count: 0x2 00:20:46.050 Submission Queue Id: 0x0 00:20:46.050 Command Id: 0x5 00:20:46.050 Phase Bit: 0 00:20:46.050 Status Code: 0x2 00:20:46.050 Status Code Type: 0x0 00:20:46.050 Do Not Retry: 1 00:20:46.050 Error Location: 0x28 00:20:46.050 LBA: 0x0 00:20:46.050 Namespace: 0x0 00:20:46.050 Vendor Log Page: 0x0 00:20:46.050 ----------- 00:20:46.050 Entry: 2 00:20:46.050 Error Count: 0x1 00:20:46.050 Submission Queue Id: 0x0 00:20:46.050 Command Id: 0x4 00:20:46.050 Phase Bit: 0 00:20:46.050 Status Code: 0x2 00:20:46.050 Status Code Type: 0x0 00:20:46.050 Do Not Retry: 1 00:20:46.050 Error Location: 0x28 00:20:46.050 LBA: 0x0 00:20:46.050 Namespace: 0x0 00:20:46.050 Vendor Log Page: 0x0 00:20:46.050 00:20:46.050 Number of Queues 00:20:46.050 ================ 00:20:46.050 Number of I/O Submission Queues: 128 00:20:46.050 Number of I/O Completion Queues: 128 00:20:46.050 00:20:46.050 ZNS Specific Controller Data 00:20:46.050 ============================ 00:20:46.050 Zone Append Size Limit: 0 00:20:46.050 00:20:46.050 00:20:46.050 Active Namespaces 00:20:46.050 ================= 00:20:46.050 get_feature(0x05) failed 00:20:46.050 Namespace ID:1 00:20:46.050 Command Set Identifier: NVM (00h) 00:20:46.050 Deallocate: Supported 00:20:46.050 Deallocated/Unwritten Error: Not Supported 00:20:46.050 Deallocated Read Value: Unknown 00:20:46.050 Deallocate in Write Zeroes: Not Supported 00:20:46.050 Deallocated Guard Field: 0xFFFF 00:20:46.050 Flush: Supported 00:20:46.050 Reservation: Not Supported 00:20:46.050 Namespace Sharing Capabilities: Multiple Controllers 00:20:46.050 Size (in LBAs): 1310720 (5GiB) 00:20:46.050 Capacity (in LBAs): 1310720 (5GiB) 00:20:46.050 Utilization (in LBAs): 1310720 (5GiB) 00:20:46.050 UUID: 19f2dfac-dc02-44da-a4ab-4aed2b9ae253 00:20:46.050 Thin Provisioning: Not Supported 00:20:46.050 Per-NS Atomic Units: Yes 00:20:46.050 Atomic Boundary Size (Normal): 0 00:20:46.050 Atomic Boundary Size (PFail): 0 00:20:46.050 Atomic Boundary Offset: 0 00:20:46.050 NGUID/EUI64 Never Reused: No 00:20:46.050 ANA group ID: 1 00:20:46.050 Namespace Write Protected: No 00:20:46.050 Number of LBA Formats: 1 00:20:46.050 Current LBA Format: LBA Format #00 00:20:46.050 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:46.050 00:20:46.050 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:46.050 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:46.050 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:46.050 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:46.050 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:46.050 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:46.050 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:46.050 rmmod nvme_tcp 00:20:46.050 rmmod nvme_fabrics 00:20:46.309 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:46.309 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:46.309 16:26:11 
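Annotation: nvmftestfini then unwinds the user-space side. The `set +e` / `for i in {1..20}` pair around `modprobe -v -r nvme-tcp` is a retry loop: the module can still be referenced briefly after the last controller is destroyed, so the script keeps trying instead of failing the run; once nvme-tcp is gone, nvme-fabrics is removed and errexit is restored. The exact loop body between attempts is not visible in the trace, so the break and sleep below are assumptions about its shape:

    set +e                          # tolerate transient "module in use" failures
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1                     # assumed retry interval
    done
    modprobe -v -r nvme-fabrics
    set -e

The lines that follow restore iptables (minus the SPDK_NVMF-tagged rules), delete the veth/bridge topology and the nvmf_tgt_ns_spdk namespace, and finally clean_kernel_target removes the configfs port, namespace and subsystem in reverse creation order before unloading nvmet_tcp and nvmet.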
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:46.309 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:46.309 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.309 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.310 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.569 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:46.569 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:46.569 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:46.569 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:20:46.569 16:26:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:46.569 16:26:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:46.569 16:26:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:46.569 16:26:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:46.569 16:26:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:46.569 16:26:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:46.569 16:26:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:47.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:47.395 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:47.395 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:47.395 ************************************ 00:20:47.395 END TEST nvmf_identify_kernel_target 00:20:47.395 ************************************ 00:20:47.395 00:20:47.395 real 0m3.230s 00:20:47.395 user 0m1.148s 00:20:47.395 sys 0m1.463s 00:20:47.395 16:26:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.395 16:26:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.395 16:26:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:47.395 16:26:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:47.395 16:26:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.395 16:26:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.395 ************************************ 00:20:47.395 START TEST nvmf_auth_host 00:20:47.395 ************************************ 00:20:47.395 16:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:47.655 * Looking for test storage... 
00:20:47.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:47.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.655 --rc genhtml_branch_coverage=1 00:20:47.655 --rc genhtml_function_coverage=1 00:20:47.655 --rc genhtml_legend=1 00:20:47.655 --rc geninfo_all_blocks=1 00:20:47.655 --rc geninfo_unexecuted_blocks=1 00:20:47.655 00:20:47.655 ' 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:47.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.655 --rc genhtml_branch_coverage=1 00:20:47.655 --rc genhtml_function_coverage=1 00:20:47.655 --rc genhtml_legend=1 00:20:47.655 --rc geninfo_all_blocks=1 00:20:47.655 --rc geninfo_unexecuted_blocks=1 00:20:47.655 00:20:47.655 ' 00:20:47.655 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:47.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.655 --rc genhtml_branch_coverage=1 00:20:47.655 --rc genhtml_function_coverage=1 00:20:47.655 --rc genhtml_legend=1 00:20:47.655 --rc geninfo_all_blocks=1 00:20:47.655 --rc geninfo_unexecuted_blocks=1 00:20:47.655 00:20:47.655 ' 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:47.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.656 --rc genhtml_branch_coverage=1 00:20:47.656 --rc genhtml_function_coverage=1 00:20:47.656 --rc genhtml_legend=1 00:20:47.656 --rc geninfo_all_blocks=1 00:20:47.656 --rc geninfo_unexecuted_blocks=1 00:20:47.656 00:20:47.656 ' 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.656 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:47.656 Cannot find device "nvmf_init_br" 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:47.656 Cannot find device "nvmf_init_br2" 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:47.656 Cannot find device "nvmf_tgt_br" 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:47.656 Cannot find device "nvmf_tgt_br2" 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:47.656 Cannot find device "nvmf_init_br" 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:47.656 Cannot find device "nvmf_init_br2" 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:47.656 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:47.657 Cannot find device "nvmf_tgt_br" 00:20:47.657 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:47.657 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:47.657 Cannot find device "nvmf_tgt_br2" 00:20:47.657 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:47.657 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:47.914 Cannot find device "nvmf_br" 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:47.914 Cannot find device "nvmf_init_if" 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:47.914 Cannot find device "nvmf_init_if2" 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:47.914 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:47.914 16:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:47.914 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:47.914 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:47.915 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:47.915 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:47.915 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:47.915 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:47.915 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:47.915 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:47.915 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
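[annotation] The cleanup attempts above fail harmlessly (the devices and namespace from a previous run do not exist), after which nvmf_veth_init builds the test topology from scratch: a network namespace for the SPDK target, two veth pairs for the initiator side and two for the target side, addresses 10.0.0.1-10.0.0.4/24, and a bridge joining the host-side peers. The following is a condensed sketch of that sequence reconstructed from the commands visible in the trace; the names and addresses are the ones the log uses, and this is an illustration rather than the test script itself.

    # Namespace that will host the SPDK target
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if ends carry traffic, *_br ends get enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target-facing ends move into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring links up on both sides, then bridge the host-side peers together
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" master nvmf_br
    done

The trace continues below with the last bridge attachment, the SPDK_NVMF iptables rules, and one ping per address to confirm the four paths.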
00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:48.173 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:48.173 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:20:48.173 00:20:48.173 --- 10.0.0.3 ping statistics --- 00:20:48.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.173 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:48.173 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:48.173 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:20:48.173 00:20:48.173 --- 10.0.0.4 ping statistics --- 00:20:48.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.173 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:48.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:48.173 00:20:48.173 --- 10.0.0.1 ping statistics --- 00:20:48.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.173 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:48.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:48.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:20:48.173 00:20:48.173 --- 10.0.0.2 ping statistics --- 00:20:48.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.173 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.173 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=92927 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 92927 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92927 ']' 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
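[annotation] By this point the ipts wrapper has inserted ACCEPT rules tagged with an SPDK_NVMF comment (so cleanup can find and delete them later), the pings have confirmed reachability in both directions, and nvmfappstart launches nvmf_tgt inside the namespace with nvme_auth debug logging, recording its PID (92927 here) for waitforlisten. A rough equivalent of those last steps is sketched below; waitforlisten is approximated by polling for the RPC socket, which is a simplification of the real helper.

    # Allow NVMe/TCP (port 4420) in on the initiator interfaces and let
    # bridged traffic through; the comments mark the rules for later cleanup.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # Start the SPDK target inside the namespace with DH-HMAC-CHAP tracing enabled
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # Stand-in for waitforlisten: block until the RPC socket is available
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done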
00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.174 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.432 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.432 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:48.432 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.432 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.432 16:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.432 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.432 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:48.432 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:48.432 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:48.432 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:48.432 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:48.432 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:48.432 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=72606bd970daf64fe10faf47265d95b4 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tCK 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 72606bd970daf64fe10faf47265d95b4 0 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 72606bd970daf64fe10faf47265d95b4 0 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=72606bd970daf64fe10faf47265d95b4 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:48.433 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:48.691 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tCK 00:20:48.691 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tCK 00:20:48.691 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tCK 00:20:48.691 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:48.691 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:48.691 16:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a621760933778a2244e836507b3cc69aa5061dd1edd81e8629a108d67ccf5425 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rer 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a621760933778a2244e836507b3cc69aa5061dd1edd81e8629a108d67ccf5425 3 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a621760933778a2244e836507b3cc69aa5061dd1edd81e8629a108d67ccf5425 3 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a621760933778a2244e836507b3cc69aa5061dd1edd81e8629a108d67ccf5425 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rer 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rer 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.rer 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0d13db21782f84782b73a6f986565c840088b3c9c4eb2cdb 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iFt 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0d13db21782f84782b73a6f986565c840088b3c9c4eb2cdb 0 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0d13db21782f84782b73a6f986565c840088b3c9c4eb2cdb 0 
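[annotation] The key material for the DH-HMAC-CHAP tests is produced by repeated gen_dhchap_key calls like the ones above: read len/2 random bytes with xxd, keep them as a hex string, and wrap that string in the NVMe secret representation DHHC-1:<hash id>:<base64(secret || CRC32)>: where the hash id follows the digests map in the trace (00 = null/no hash, 01 = sha256, 02 = sha384, 03 = sha512); the result is stored mode 0600 in a mktemp file. A small self-contained approximation of that helper follows; the function name is illustrative, and the python one-liner stands in for the script's python block.

    # Illustrative stand-in for gen_dhchap_key/format_dhchap_key from the trace.
    # Usage: make_dhchap_key <null|sha256|sha384|sha512> <hex length>
    # Prints the path of a file holding a DHHC-1 formatted secret.
    make_dhchap_key() {
        local digest=$1 len=$2 key file
        declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # DHHC-1:<id>:<base64 of the ASCII hex string plus its little-endian CRC32>:
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${ids[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    # e.g. keys[0]=$(make_dhchap_key null 32)   ckeys[0]=$(make_dhchap_key sha512 64)

Further down in the trace the generated files are handed to the running target with rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tCK (and likewise for the remaining key and ckey files), while nvmet_auth_set_key echoes the same DHHC-1 strings into the kernel nvmet host entries so both sides of the handshake share the secrets.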
00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0d13db21782f84782b73a6f986565c840088b3c9c4eb2cdb 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iFt 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iFt 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.iFt 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d8859102fb051b134da89c7d55884e88956f4552e0756d59 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.MOe 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d8859102fb051b134da89c7d55884e88956f4552e0756d59 2 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d8859102fb051b134da89c7d55884e88956f4552e0756d59 2 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d8859102fb051b134da89c7d55884e88956f4552e0756d59 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.MOe 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.MOe 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.MOe 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:48.746 16:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d4c28b0d2edee4e2f96d475f5a04efdd 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.EGI 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d4c28b0d2edee4e2f96d475f5a04efdd 1 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d4c28b0d2edee4e2f96d475f5a04efdd 1 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d4c28b0d2edee4e2f96d475f5a04efdd 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:48.746 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.EGI 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.EGI 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.EGI 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=78376d0ecd2ab8cb18964fbd5d5ec00c 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.FzG 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 78376d0ecd2ab8cb18964fbd5d5ec00c 1 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 78376d0ecd2ab8cb18964fbd5d5ec00c 1 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=78376d0ecd2ab8cb18964fbd5d5ec00c 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.FzG 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.FzG 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.FzG 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3195a47581bfbcc66abfe88b582dfcea5d853cc90c6b20f8 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4GB 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3195a47581bfbcc66abfe88b582dfcea5d853cc90c6b20f8 2 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3195a47581bfbcc66abfe88b582dfcea5d853cc90c6b20f8 2 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3195a47581bfbcc66abfe88b582dfcea5d853cc90c6b20f8 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4GB 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4GB 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4GB 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:49.005 16:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a49749cf53d394ee4b23aec7251a2658 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rEH 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a49749cf53d394ee4b23aec7251a2658 0 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a49749cf53d394ee4b23aec7251a2658 0 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a49749cf53d394ee4b23aec7251a2658 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rEH 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rEH 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.rEH 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:49.005 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a47c010ace3c1e4d7a65780e47f86fc1b7b778123f134e93aa581f4187b3e5a8 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.B5L 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a47c010ace3c1e4d7a65780e47f86fc1b7b778123f134e93aa581f4187b3e5a8 3 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a47c010ace3c1e4d7a65780e47f86fc1b7b778123f134e93aa581f4187b3e5a8 3 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a47c010ace3c1e4d7a65780e47f86fc1b7b778123f134e93aa581f4187b3e5a8 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.B5L 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.B5L 00:20:49.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.B5L 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92927 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92927 ']' 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.006 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tCK 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.rer ]] 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rer 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.iFt 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.MOe ]] 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.MOe 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EGI 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.FzG ]] 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FzG 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.571 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4GB 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rEH ]] 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rEH 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.B5L 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.572 16:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.572 16:26:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:49.572 16:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:49.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:49.830 Waiting for block devices as requested 00:20:49.830 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:50.088 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:50.656 No valid GPT data, bailing 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:50.656 No valid GPT data, bailing 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:50.656 No valid GPT data, bailing 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:50.656 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:50.915 No valid GPT data, bailing 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -a 10.0.0.1 -t tcp -s 4420 00:20:50.915 00:20:50.915 Discovery Log Number of Records 2, Generation counter 2 00:20:50.915 =====Discovery Log Entry 0====== 00:20:50.915 trtype: tcp 00:20:50.915 adrfam: ipv4 00:20:50.915 subtype: current discovery subsystem 00:20:50.915 treq: not specified, sq flow control disable supported 00:20:50.915 portid: 1 00:20:50.915 trsvcid: 4420 00:20:50.915 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:50.915 traddr: 10.0.0.1 00:20:50.915 eflags: none 00:20:50.915 sectype: none 00:20:50.915 =====Discovery Log Entry 1====== 00:20:50.915 trtype: tcp 00:20:50.915 adrfam: ipv4 00:20:50.915 subtype: nvme subsystem 00:20:50.915 treq: not specified, sq flow control disable supported 00:20:50.915 portid: 1 00:20:50.915 trsvcid: 4420 00:20:50.915 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:50.915 traddr: 10.0.0.1 00:20:50.915 eflags: none 00:20:50.915 sectype: none 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:50.915 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
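host/auth.sh@36-@51 then registers the host NQN on the target, turns off allow_any_host so that only listed hosts may connect, and loads the DH-HMAC-CHAP material for the key index under test. The redirection targets are again hidden by set -x; assuming the usual nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), nvmet_auth_set_key amounts to something like the following, where $key and $ckey stand for the DHHC-1:... strings traced above:

  cfg=/sys/kernel/config/nvmet
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0
  mkdir "$cfg/hosts/$hostnqn"
  echo 0 > "$cfg/subsystems/$subnqn/attr_allow_any_host"       # only allowed_hosts may connect
  ln -s "$cfg/hosts/$hostnqn" "$cfg/subsystems/$subnqn/allowed_hosts/"
  echo 'hmac(sha256)' > "$cfg/hosts/$hostnqn/dhchap_hash"       # digest under test
  echo ffdhe2048      > "$cfg/hosts/$hostnqn/dhchap_dhgroup"    # DH group under test
  echo "$key"         > "$cfg/hosts/$hostnqn/dhchap_key"        # host secret
  echo "$ckey"        > "$cfg/hosts/$hostnqn/dhchap_ctrl_key"   # controller secret, if any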
10.0.0.1 ]] 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.173 nvme0n1 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.173 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
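On the host side, connect_authenticate drives everything through rpc_cmd, a thin wrapper around scripts/rpc.py talking to the running SPDK application. The first pass (@88/@93) enables every digest and DH group at once and attaches with key1/ckey1; key1 and ckey1 are names of keys the test registered beforehand, presumably with keyring_file_add_key, which is not part of this excerpt. Stripped of the wrapper, the same three steps look roughly like:

  # Equivalent rpc.py calls (sketch); key1/ckey1 must already exist in SPDK's keyring.
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify the controller came up, then tear it down for the next combination.
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the trace are the bdev names that bdev_nvme_attach_controller prints when the connect, and therefore the authentication, succeeds.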
host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.174 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.432 nvme0n1 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.432 
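From host/auth.sh@100 onward the test settles into its main matrix: for every digest, every DH group, and every key index it re-keys the kernel target and re-attaches the SPDK host, so the rest of this excerpt is the same nvmet_auth_set_key/connect_authenticate pair repeated with different parameters. Paraphrased from the traced line numbers:

  # Paraphrased loop structure (host/auth.sh@100-@104 in the trace).
  for digest in "${digests[@]}"; do          # sha256 sha384 sha512
      for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 ... ffdhe8192
          for keyid in "${!keys[@]}"; do     # 0..4; keyid 4 has no controller key
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done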
16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.432 16:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.432 16:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.432 nvme0n1 00:20:51.432 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.432 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.432 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.432 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.432 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.432 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:20:51.691 16:26:17 
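get_main_ns_ip, traced before every attach, only maps the transport to the environment variable holding the address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which resolves to 10.0.0.1 in this run. Condensed into a sketch (assuming the transport comes from $TEST_TRANSPORT as elsewhere in the suite; not the verbatim helper):

  get_main_ns_ip() {
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      local var=${ip_candidates[$TEST_TRANSPORT]}   # "tcp" here
      local ip=${!var}                              # indirect expansion -> 10.0.0.1
      [[ -n $ip ]] && echo "$ip"
  }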
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.691 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.692 nvme0n1 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.692 16:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.692 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.962 nvme0n1 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.962 
16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:51.962 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
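Key index 4 has no controller key (ckey is empty in the trace above), so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 produces an empty array and the attach is issued without --dhchap-ctrlr-key, i.e. host-only (unidirectional) authentication:

  # When ckeys[keyid] is empty, $ckey expands to nothing and the flag is dropped.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"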
00:20:51.963 nvme0n1 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.963 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.221 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:52.480 16:26:17 
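All of the secrets cycled through this run use the NVMe in-band authentication key representation DHHC-1:<t>:<base64>:, where, as far as that format is documented, <t> records how the secret was transformed (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 blob carries the secret plus a CRC-32. Recent nvme-cli can mint such keys; a hedged example (flag names taken from nvme-cli, not from this log):

  # Generate a 32-byte DH-HMAC-CHAP secret transformed with SHA-256 for the given host NQN.
  nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0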
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.480 16:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.480 nvme0n1 00:20:52.480 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.480 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.480 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.480 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.480 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.480 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.739 16:26:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.739 16:26:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.739 nvme0n1 00:20:52.739 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.740 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.999 nvme0n1 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:52.999 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.000 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.259 nvme0n1 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.259 nvme0n1 00:20:53.259 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.260 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.260 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.260 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.260 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.260 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.518 16:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.089 16:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.089 nvme0n1 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.089 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.348 16:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.348 nvme0n1 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.348 16:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.606 nvme0n1 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.606 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.864 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.865 nvme0n1 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.865 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:20:55.122 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.123 16:26:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.123 nvme0n1 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.123 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.381 16:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:56.757 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.758 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.017 nvme0n1 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:57.017 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.276 16:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.536 nvme0n1 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.536 16:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.536 16:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.536 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.794 nvme0n1 00:20:57.794 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.794 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.794 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.794 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.794 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.794 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:58.053 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.053 
16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.313 nvme0n1 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.313 16:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.880 nvme0n1 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.880 16:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:58.880 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.881 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.448 nvme0n1 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.448 16:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.015 nvme0n1 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.015 
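The trace above repeats one host-side authentication pass: restrict the initiator to a single DH-HMAC-CHAP digest/DH-group pair with bdev_nvme_set_options, attach with the key under test, confirm the controller came up, then detach before the next combination. The log drives this through the harness's rpc_cmd wrapper; a minimal standalone sketch with SPDK's scripts/rpc.py and the same arguments would look roughly like the following (the key names key2/ckey2 stand for keys the test registered earlier and are placeholders here, and the default RPC socket is assumed).

RPC=scripts/rpc.py   # assumes the default SPDK RPC socket

# Allow only one digest/dhgroup combination for this pass.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach to the target, authenticating with the key under test
# (and the controller key when bidirectional auth is being exercised).
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the controller exists, then tear it down before the next iteration.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$RPC bdev_nvme_detach_controller nvme0
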
16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.015 16:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.581 nvme0n1 00:21:00.581 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.581 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.582 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.157 nvme0n1 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.157 16:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.157 16:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.157 16:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.773 nvme0n1 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.773 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:01.774 nvme0n1 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.774 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.033 nvme0n1 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:02.033 
16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.033 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.293 nvme0n1 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:02.293 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.294 
16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.294 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.553 nvme0n1 00:21:02.553 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.553 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.553 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.553 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.553 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.553 16:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:02.553 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.554 nvme0n1 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.554 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.813 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.814 nvme0n1 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.814 
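Taken together, the trace is a sweep over every digest, DH group, and key index: the for loops at host/auth.sh@100-102 and the nvmet_auth_set_key/connect_authenticate calls at @103-104 reduce to the skeleton below. Loop variable names are taken from the xtrace output; the digests, dhgroups, and keys arrays are populated earlier in auth.sh, and only sha256/sha384 with ffdhe2048 through ffdhe8192 and key indexes 0-4 are visible in this excerpt.

for digest in "${digests[@]}"; do           # sha256, sha384, ... as seen above
    for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048 ... ffdhe8192
        for keyid in "${!keys[@]}"; do      # key indexes 0..4
            # Provision the target's expectations, then prove the host can
            # authenticate with that digest/dhgroup/key combination.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
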
16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.814 16:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.814 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.074 nvme0n1 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:03.074 16:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.074 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.333 nvme0n1 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.333 16:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.333 nvme0n1 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.333 16:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:03.593 
16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
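The trace above is one complete pass of the connect_authenticate() loop: for each key ID (0 through 4) the target side is given the sha384/ffdhe3072 key, the host side is restricted to that digest/DH-group pair, a controller is attached with DH-HMAC-CHAP keys, its presence is verified, and it is detached again before the next DH group (ffdhe4096 below) is exercised. As a rough, standalone sketch of the initiator-side RPC sequence being traced here, assuming an SPDK target already listening on 10.0.0.1:4420 with the NQNs shown in the log and keys already registered under the names key0/ckey0 (the rpc.py path is an assumption; the flags and arguments are taken verbatim from the trace):

rpc=./scripts/rpc.py   # assumed path to rpc.py in the local SPDK checkout

# restrict the host to the digest/DH group pair under test
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# attach using the host key and the (optional) bidirectional controller key
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# authentication succeeded if the controller shows up; then clean up
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # the test expects "nvme0"
$rpc bdev_nvme_detach_controller nvme0

The test repeats this sequence for every key ID and every configured DH group, which is why the same set_options/attach/get/detach pattern recurs throughout the log below.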
00:21:03.593 nvme0n1 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.593 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:03.594 16:26:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.594 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.852 nvme0n1 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.852 16:26:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.852 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.111 16:26:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.111 nvme0n1 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:04.111 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.112 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.372 nvme0n1 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.372 16:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.372 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.633 nvme0n1 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:04.633 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.634 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.634 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.634 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.893 nvme0n1 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.893 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.894 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.153 16:26:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.153 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.413 nvme0n1 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.413 16:26:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.413 16:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.673 nvme0n1 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.673 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.932 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.932 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.932 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.932 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.932 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.932 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.932 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.932 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.192 nvme0n1 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.192 16:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.452 nvme0n1 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:06.452 16:26:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.452 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.021 nvme0n1 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.021 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.589 nvme0n1 00:21:07.589 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.589 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.589 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.589 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.589 16:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:07.589 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.590 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.158 nvme0n1 00:21:08.158 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.159 16:26:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.159 16:26:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.159 16:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.728 nvme0n1 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:08.728 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.728 
16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.296 nvme0n1 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.296 16:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.864 nvme0n1 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:09.864 16:26:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.864 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.865 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.865 16:26:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.865 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.865 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.865 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.865 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.124 nvme0n1 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:10.124 16:26:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.124 nvme0n1 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.124 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 nvme0n1 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.384 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.385 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:10.385 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.385 16:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.644 nvme0n1 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.644 nvme0n1 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.644 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.645 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.645 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.904 nvme0n1 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.904 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.163 nvme0n1 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:11.163 
16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.163 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.422 nvme0n1 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.422 
16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.422 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.423 16:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 nvme0n1 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 nvme0n1 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.682 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.942 nvme0n1 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.942 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.202 
16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.202 16:26:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.202 nvme0n1 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.202 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:12.462 16:26:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.462 16:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.462 nvme0n1 00:21:12.462 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.462 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.462 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.462 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.462 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.462 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.721 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.722 16:26:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.722 nvme0n1 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.722 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:12.981 
16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.981 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:12.982 nvme0n1 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.982 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:13.241 16:26:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.241 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.501 nvme0n1 00:21:13.501 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.501 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.501 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.501 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.501 16:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.501 16:26:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.501 16:26:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.501 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 nvme0n1 00:21:13.759 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.759 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.759 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.759 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.759 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.017 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.018 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.018 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.018 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.018 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.276 nvme0n1 00:21:14.276 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.276 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.276 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.276 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.276 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.276 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.276 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.277 16:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.536 nvme0n1 00:21:14.536 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.536 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.536 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.536 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.536 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.796 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.056 nvme0n1 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzI2MDZiZDk3MGRhZjY0ZmUxMGZhZjQ3MjY1ZDk1YjS3zNPR: 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: ]] 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTYyMTc2MDkzMzc3OGEyMjQ0ZTgzNjUwN2IzY2M2OWFhNTA2MWRkMWVkZDgxZTg2MjlhMTA4ZDY3Y2NmNTQyNYGp7+I=: 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.056 16:26:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.056 16:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.626 nvme0n1 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.626 16:26:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.626 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.194 nvme0n1 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:16.194 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.195 16:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.764 nvme0n1 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzE5NWE0NzU4MWJmYmNjNjZhYmZlODhiNTgyZGZjZWE1ZDg1M2NjOTBjNmIyMGY4Xj3sCg==: 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: ]] 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTQ5NzQ5Y2Y1M2QzOTRlZTRiMjNhZWM3MjUxYTI2NTjMeYB6: 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.764 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.334 nvme0n1 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTQ3YzAxMGFjZTNjMWU0ZDdhNjU3ODBlNDdmODZmYzFiN2I3NzgxMjNmMTM0ZTkzYWE1ODFmNDE4N2IzZTVhOAjX04M=: 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:17.334 16:26:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.334 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.593 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.594 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.594 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.594 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.594 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.594 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:17.594 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.594 16:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.852 nvme0n1 00:21:17.853 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.853 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.853 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.853 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.853 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.853 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.113 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.113 request: 00:21:18.113 { 00:21:18.113 "name": "nvme0", 00:21:18.113 "trtype": "tcp", 00:21:18.113 "traddr": "10.0.0.1", 00:21:18.113 "adrfam": "ipv4", 00:21:18.113 "trsvcid": "4420", 00:21:18.113 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:18.113 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:18.113 "prchk_reftag": false, 00:21:18.114 "prchk_guard": false, 00:21:18.114 "hdgst": false, 00:21:18.114 "ddgst": false, 00:21:18.114 "allow_unrecognized_csi": false, 00:21:18.114 "method": "bdev_nvme_attach_controller", 00:21:18.114 "req_id": 1 00:21:18.114 } 00:21:18.114 Got JSON-RPC error response 00:21:18.114 response: 00:21:18.114 { 00:21:18.114 "code": -5, 00:21:18.114 "message": "Input/output error" 00:21:18.114 } 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.114 request: 00:21:18.114 { 00:21:18.114 "name": "nvme0", 00:21:18.114 "trtype": "tcp", 00:21:18.114 "traddr": "10.0.0.1", 00:21:18.114 "adrfam": "ipv4", 00:21:18.114 "trsvcid": "4420", 00:21:18.114 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:18.114 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:18.114 "prchk_reftag": false, 00:21:18.114 "prchk_guard": false, 00:21:18.114 "hdgst": false, 00:21:18.114 "ddgst": false, 00:21:18.114 "dhchap_key": "key2", 00:21:18.114 "allow_unrecognized_csi": false, 00:21:18.114 "method": "bdev_nvme_attach_controller", 00:21:18.114 "req_id": 1 00:21:18.114 } 00:21:18.114 Got JSON-RPC error response 00:21:18.114 response: 00:21:18.114 { 00:21:18.114 "code": -5, 00:21:18.114 "message": "Input/output error" 00:21:18.114 } 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.114 16:26:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.114 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.374 request: 00:21:18.374 { 00:21:18.374 "name": "nvme0", 00:21:18.374 "trtype": "tcp", 00:21:18.374 "traddr": "10.0.0.1", 00:21:18.374 "adrfam": "ipv4", 00:21:18.374 "trsvcid": "4420", 
00:21:18.374 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:18.374 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:18.374 "prchk_reftag": false, 00:21:18.374 "prchk_guard": false, 00:21:18.374 "hdgst": false, 00:21:18.374 "ddgst": false, 00:21:18.374 "dhchap_key": "key1", 00:21:18.374 "dhchap_ctrlr_key": "ckey2", 00:21:18.374 "allow_unrecognized_csi": false, 00:21:18.374 "method": "bdev_nvme_attach_controller", 00:21:18.374 "req_id": 1 00:21:18.374 } 00:21:18.374 Got JSON-RPC error response 00:21:18.374 response: 00:21:18.374 { 00:21:18.374 "code": -5, 00:21:18.374 "message": "Input/output error" 00:21:18.374 } 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.374 nvme0n1 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.374 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.374 request: 00:21:18.374 { 00:21:18.374 "name": "nvme0", 00:21:18.374 "dhchap_key": "key1", 00:21:18.374 "dhchap_ctrlr_key": "ckey2", 00:21:18.374 "method": "bdev_nvme_set_keys", 00:21:18.374 "req_id": 1 00:21:18.374 } 00:21:18.374 Got JSON-RPC error response 00:21:18.374 response: 00:21:18.374 
{ 00:21:18.375 "code": -5, 00:21:18.375 "message": "Input/output error" 00:21:18.375 } 00:21:18.375 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:18.375 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:18.375 16:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.375 16:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.375 16:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.375 16:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.375 16:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:18.375 16:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.375 16:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.634 16:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.634 16:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:18.634 16:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:19.571 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQxM2RiMjE3ODJmODQ3ODJiNzNhNmY5ODY1NjVjODQwMDg4YjNjOWM0ZWIyY2RieMxPow==: 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: ]] 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDg4NTkxMDJmYjA1MWIxMzRkYTg5YzdkNTU4ODRlODg5NTZmNDU1MmUwNzU2ZDU5X3/Eyw==: 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.572 nvme0n1 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:19.572 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDRjMjhiMGQyZWRlZTRlMmY5NmQ0NzVmNWEwNGVmZGTALf+r: 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: ]] 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgzNzZkMGVjZDJhYjhjYjE4OTY0ZmJkNWQ1ZWMwMGM1tjd7: 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.831 request: 00:21:19.831 { 00:21:19.831 "name": "nvme0", 00:21:19.831 "dhchap_key": "key2", 00:21:19.831 "dhchap_ctrlr_key": "ckey1", 00:21:19.831 "method": "bdev_nvme_set_keys", 00:21:19.831 "req_id": 1 00:21:19.831 } 00:21:19.831 Got JSON-RPC error response 00:21:19.831 response: 00:21:19.831 { 00:21:19.831 "code": -13, 00:21:19.831 "message": "Permission denied" 00:21:19.831 } 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:19.831 16:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:20.769 rmmod nvme_tcp 00:21:20.769 rmmod nvme_fabrics 00:21:20.769 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 92927 ']' 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 92927 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 92927 ']' 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 92927 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92927 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.028 killing process with pid 92927 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92927' 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 92927 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 92927 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:21.028 16:26:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:21.028 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:21.288 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:21.289 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:21.289 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:21.289 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:21.289 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:21.289 16:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:22.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:22.227 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
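The teardown traced above dismantles the configfs-based kernel NVMe-oF target that served as the authentication peer. The sketch below replays that sequence as a standalone script; the paths and ordering follow the trace (revoke the allowed host, unlink the subsystem from the listener port, remove the namespace, port and subsystem directories, then unload the modules), while the destination of the bare "echo 0" is assumed to be the namespace enable attribute, which the trace does not show. Run as root.

#!/usr/bin/env bash
# Sketch of the clean_kernel_target teardown traced above; the "enable" target is an assumption.
set -e
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/$subnqn
port=/sys/kernel/config/nvmet/ports/1

rm -f "$subsys/allowed_hosts/$hostnqn"           # revoke host access first
rmdir "/sys/kernel/config/nvmet/hosts/$hostnqn"  # then drop the host entry itself

if [[ -e $subsys ]]; then
    echo 0 > "$subsys/namespaces/1/enable"       # assumed target of the bare "echo 0"
    rm -f "$port/subsystems/$subnqn"             # detach the subsystem from the listener port
    rmdir "$subsys/namespaces/1"
    rmdir "$port"
    rmdir "$subsys"
fi
modprobe -r nvmet_tcp nvmet                      # unload the kernel target modules last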
00:21:22.227 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:22.227 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tCK /tmp/spdk.key-null.iFt /tmp/spdk.key-sha256.EGI /tmp/spdk.key-sha384.4GB /tmp/spdk.key-sha512.B5L /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:22.227 16:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:22.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:22.745 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:22.745 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:22.745 00:21:22.745 real 0m35.211s 00:21:22.745 user 0m32.477s 00:21:22.745 sys 0m3.805s 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.745 ************************************ 00:21:22.745 END TEST nvmf_auth_host 00:21:22.745 ************************************ 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.745 ************************************ 00:21:22.745 START TEST nvmf_digest 00:21:22.745 ************************************ 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:22.745 * Looking for test storage... 
00:21:22.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:21:22.745 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:23.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.005 --rc genhtml_branch_coverage=1 00:21:23.005 --rc genhtml_function_coverage=1 00:21:23.005 --rc genhtml_legend=1 00:21:23.005 --rc geninfo_all_blocks=1 00:21:23.005 --rc geninfo_unexecuted_blocks=1 00:21:23.005 00:21:23.005 ' 00:21:23.005 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:23.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.005 --rc genhtml_branch_coverage=1 00:21:23.005 --rc genhtml_function_coverage=1 00:21:23.005 --rc genhtml_legend=1 00:21:23.005 --rc geninfo_all_blocks=1 00:21:23.005 --rc geninfo_unexecuted_blocks=1 00:21:23.005 00:21:23.005 ' 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:23.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.006 --rc genhtml_branch_coverage=1 00:21:23.006 --rc genhtml_function_coverage=1 00:21:23.006 --rc genhtml_legend=1 00:21:23.006 --rc geninfo_all_blocks=1 00:21:23.006 --rc geninfo_unexecuted_blocks=1 00:21:23.006 00:21:23.006 ' 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:23.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.006 --rc genhtml_branch_coverage=1 00:21:23.006 --rc genhtml_function_coverage=1 00:21:23.006 --rc genhtml_legend=1 00:21:23.006 --rc geninfo_all_blocks=1 00:21:23.006 --rc geninfo_unexecuted_blocks=1 00:21:23.006 00:21:23.006 ' 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.006 16:26:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:23.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:23.006 Cannot find device "nvmf_init_br" 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:23.006 Cannot find device "nvmf_init_br2" 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:23.006 Cannot find device "nvmf_tgt_br" 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:23.006 Cannot find device "nvmf_tgt_br2" 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:23.006 Cannot find device "nvmf_init_br" 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:23.006 Cannot find device "nvmf_init_br2" 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:23.006 Cannot find device "nvmf_tgt_br" 00:21:23.006 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:23.007 Cannot find device "nvmf_tgt_br2" 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:23.007 Cannot find device "nvmf_br" 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:23.007 Cannot find device "nvmf_init_if" 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:23.007 Cannot find device "nvmf_init_if2" 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:23.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:23.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:23.007 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:23.266 16:26:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:23.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:23.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.314 ms 00:21:23.266 00:21:23.266 --- 10.0.0.3 ping statistics --- 00:21:23.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.266 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:23.266 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:23.266 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:21:23.266 00:21:23.266 --- 10.0.0.4 ping statistics --- 00:21:23.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.266 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:23.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:21:23.266 00:21:23.266 --- 10.0.0.1 ping statistics --- 00:21:23.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.266 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:23.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:21:23.266 00:21:23.266 --- 10.0.0.2 ping statistics --- 00:21:23.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.266 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:23.266 ************************************ 00:21:23.266 START TEST nvmf_digest_clean 00:21:23.266 ************************************ 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
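The nvmf_veth_init sequence above is what the digest runs below depend on: a dedicated network namespace for the target, veth pairs whose host-side peers are enslaved to a bridge, the addresses 10.0.0.1-10.0.0.4, iptables ACCEPT rules for NVMe/TCP port 4420, and ping checks in both directions. The earlier "Cannot find device" / "Cannot open network namespace" messages are expected: the helper first tries to tear down leftovers from a previous run, and each failed teardown command is followed by "# true". A minimal stand-alone sketch of the same topology, with device names and addresses taken from the log and the second interface pair (nvmf_init_if2 / nvmf_tgt_if2) omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk                               # target side lives in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # bridge joins the host-side peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP to the initiator interface
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow traffic across the bridge
  ping -c 1 10.0.0.3                                          # initiator -> target address, as checked above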
00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=94549 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 94549 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94549 ']' 00:21:23.266 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.267 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:23.267 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.267 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.267 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.267 16:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:23.526 [2024-11-26 16:26:48.963557] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:21:23.526 [2024-11-26 16:26:48.963646] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.526 [2024-11-26 16:26:49.116914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.526 [2024-11-26 16:26:49.140169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.526 [2024-11-26 16:26:49.140228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.526 [2024-11-26 16:26:49.140241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.526 [2024-11-26 16:26:49.140251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.526 [2024-11-26 16:26:49.140259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
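At this point the target application has been started inside the namespace with "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc", so its framework stays paused until RPCs arrive on /var/tmp/spdk.sock; waitforlisten above polls for that socket. common_target_config then configures the target over RPC, which is what produces the "null0", "TCP Transport Init" and "Listening on 10.0.0.3 port 4420" notices that follow. The literal calls live in host/digest.sh and nvmf/common.sh; a representative sequence using the stock SPDK rpc.py commands (the command names are standard SPDK RPCs, but the null bdev size and block size here are illustrative, not taken from this log) would be roughly:

  scripts/rpc.py framework_start_init                                   # leave the --wait-for-rpc pause
  scripts/rpc.py bdev_null_create null0 100 4096                        # backing bdev for the namespace (sizes illustrative)
  scripts/rpc.py nvmf_create_transport -t tcp -o                        # "-t tcp -o" per NVMF_TRANSPORT_OPTS above
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420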
00:21:23.526 [2024-11-26 16:26:49.140633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:23.786 [2024-11-26 16:26:49.308543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:23.786 null0 00:21:23.786 [2024-11-26 16:26:49.344189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.786 [2024-11-26 16:26:49.368363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94574 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94574 /var/tmp/bperf.sock 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94574 ']' 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.786 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:23.786 [2024-11-26 16:26:49.432644] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:21:23.786 [2024-11-26 16:26:49.432765] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94574 ] 00:21:24.045 [2024-11-26 16:26:49.584050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.045 [2024-11-26 16:26:49.608130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.045 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.045 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:24.045 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:24.045 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:24.045 16:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:24.612 [2024-11-26 16:26:49.983323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:24.612 16:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.612 16:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.871 nvme0n1 00:21:24.871 16:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:24.871 16:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:24.871 Running I/O for 2 seconds... 
00:21:26.813 17653.00 IOPS, 68.96 MiB/s [2024-11-26T16:26:52.466Z] 17462.50 IOPS, 68.21 MiB/s 00:21:26.813 Latency(us) 00:21:26.813 [2024-11-26T16:26:52.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.813 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:26.813 nvme0n1 : 2.00 17488.68 68.32 0.00 0.00 7313.16 6672.76 21805.61 00:21:26.813 [2024-11-26T16:26:52.466Z] =================================================================================================================== 00:21:26.813 [2024-11-26T16:26:52.466Z] Total : 17488.68 68.32 0.00 0.00 7313.16 6672.76 21805.61 00:21:26.813 { 00:21:26.813 "results": [ 00:21:26.813 { 00:21:26.813 "job": "nvme0n1", 00:21:26.813 "core_mask": "0x2", 00:21:26.813 "workload": "randread", 00:21:26.813 "status": "finished", 00:21:26.813 "queue_depth": 128, 00:21:26.813 "io_size": 4096, 00:21:26.813 "runtime": 2.004325, 00:21:26.813 "iops": 17488.68072792586, 00:21:26.813 "mibps": 68.3151590934604, 00:21:26.813 "io_failed": 0, 00:21:26.813 "io_timeout": 0, 00:21:26.813 "avg_latency_us": 7313.157656016992, 00:21:26.813 "min_latency_us": 6672.756363636364, 00:21:26.813 "max_latency_us": 21805.614545454544 00:21:26.813 } 00:21:26.813 ], 00:21:26.813 "core_count": 1 00:21:26.813 } 00:21:26.813 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:26.813 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:26.813 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:26.813 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:26.813 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:26.813 | select(.opcode=="crc32c") 00:21:26.813 | "\(.module_name) \(.executed)"' 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94574 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94574 ']' 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94574 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:27.072 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94574 00:21:27.330 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:27.330 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
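The first run above is randread with 4096-byte I/O at queue depth 128; the reported throughput follows directly from the block size (17488.68 IOPS x 4096 B / 2^20 ≈ 68.3 MiB/s, matching the table). After each 2-second run, "read -r acc_module acc_executed" consumes the output of get_accel_stats, which queries the bdevperf app's accel framework and keeps only the crc32c entry; the test then checks that the executed count is non-zero and that the module is the expected one ("software" here, since scan_dsa=false and no DSA offload is in play). The socket path and jq filter below are copied from the log; the sample output line is illustrative:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints e.g.:  software 34981
  # run_bperf then asserts acc_executed > 0 and that acc_module matches the expected module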
00:21:27.330 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94574' 00:21:27.330 killing process with pid 94574 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94574 00:21:27.331 Received shutdown signal, test time was about 2.000000 seconds 00:21:27.331 00:21:27.331 Latency(us) 00:21:27.331 [2024-11-26T16:26:52.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.331 [2024-11-26T16:26:52.984Z] =================================================================================================================== 00:21:27.331 [2024-11-26T16:26:52.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94574 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94621 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94621 /var/tmp/bperf.sock 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94621 ']' 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:27.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.331 16:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:27.331 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:27.331 Zero copy mechanism will not be used. 00:21:27.331 [2024-11-26 16:26:52.895903] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:21:27.331 [2024-11-26 16:26:52.895986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94621 ] 00:21:27.589 [2024-11-26 16:26:53.028419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.589 [2024-11-26 16:26:53.046687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.589 16:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.589 16:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:27.589 16:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:27.589 16:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:27.589 16:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:27.847 [2024-11-26 16:26:53.344386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.847 16:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.847 16:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:28.106 nvme0n1 00:21:28.106 16:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:28.106 16:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:28.365 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:28.365 Zero copy mechanism will not be used. 00:21:28.365 Running I/O for 2 seconds... 
00:21:30.237 8896.00 IOPS, 1112.00 MiB/s [2024-11-26T16:26:55.890Z] 8936.00 IOPS, 1117.00 MiB/s 00:21:30.237 Latency(us) 00:21:30.237 [2024-11-26T16:26:55.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.237 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:30.237 nvme0n1 : 2.00 8932.52 1116.56 0.00 0.00 1788.39 1578.82 9115.46 00:21:30.237 [2024-11-26T16:26:55.890Z] =================================================================================================================== 00:21:30.237 [2024-11-26T16:26:55.890Z] Total : 8932.52 1116.56 0.00 0.00 1788.39 1578.82 9115.46 00:21:30.237 { 00:21:30.237 "results": [ 00:21:30.237 { 00:21:30.237 "job": "nvme0n1", 00:21:30.237 "core_mask": "0x2", 00:21:30.237 "workload": "randread", 00:21:30.237 "status": "finished", 00:21:30.237 "queue_depth": 16, 00:21:30.237 "io_size": 131072, 00:21:30.237 "runtime": 2.002571, 00:21:30.237 "iops": 8932.517249076313, 00:21:30.237 "mibps": 1116.564656134539, 00:21:30.237 "io_failed": 0, 00:21:30.237 "io_timeout": 0, 00:21:30.237 "avg_latency_us": 1788.3879395023582, 00:21:30.237 "min_latency_us": 1578.8218181818181, 00:21:30.237 "max_latency_us": 9115.461818181819 00:21:30.237 } 00:21:30.237 ], 00:21:30.237 "core_count": 1 00:21:30.237 } 00:21:30.237 16:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:30.237 16:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:30.237 16:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:30.237 16:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:30.237 16:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:30.237 | select(.opcode=="crc32c") 00:21:30.237 | "\(.module_name) \(.executed)"' 00:21:30.804 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:30.804 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94621 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94621 ']' 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94621 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94621 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
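For the 131072-byte runs bdevperf notes that the I/O size exceeds the 65536-byte zero-copy threshold, so zero-copy sends are not used for these buffers; the message is informational, not an error. The throughput again matches the block size: 8932.52 IOPS x 131072 B / 2^20 ≈ 1116.6 MiB/s, which is what the result table above reports.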
00:21:30.805 killing process with pid 94621 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94621' 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94621 00:21:30.805 Received shutdown signal, test time was about 2.000000 seconds 00:21:30.805 00:21:30.805 Latency(us) 00:21:30.805 [2024-11-26T16:26:56.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.805 [2024-11-26T16:26:56.458Z] =================================================================================================================== 00:21:30.805 [2024-11-26T16:26:56.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94621 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94668 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94668 /var/tmp/bperf.sock 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94668 ']' 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.805 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:30.805 [2024-11-26 16:26:56.374576] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:21:30.805 [2024-11-26 16:26:56.374675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94668 ] 00:21:31.064 [2024-11-26 16:26:56.521761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.064 [2024-11-26 16:26:56.542495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.064 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.064 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:31.064 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:31.064 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:31.064 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:31.323 [2024-11-26 16:26:56.909304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:31.323 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:31.323 16:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:31.581 nvme0n1 00:21:31.840 16:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:31.840 16:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:31.840 Running I/O for 2 seconds... 
00:21:33.713 19051.00 IOPS, 74.42 MiB/s [2024-11-26T16:26:59.628Z] 19177.50 IOPS, 74.91 MiB/s 00:21:33.975 Latency(us) 00:21:33.975 [2024-11-26T16:26:59.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.975 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:33.975 nvme0n1 : 2.01 19179.35 74.92 0.00 0.00 6661.29 4051.32 14477.50 00:21:33.975 [2024-11-26T16:26:59.628Z] =================================================================================================================== 00:21:33.975 [2024-11-26T16:26:59.628Z] Total : 19179.35 74.92 0.00 0.00 6661.29 4051.32 14477.50 00:21:33.975 { 00:21:33.975 "results": [ 00:21:33.975 { 00:21:33.975 "job": "nvme0n1", 00:21:33.975 "core_mask": "0x2", 00:21:33.975 "workload": "randwrite", 00:21:33.975 "status": "finished", 00:21:33.975 "queue_depth": 128, 00:21:33.975 "io_size": 4096, 00:21:33.975 "runtime": 2.007576, 00:21:33.975 "iops": 19179.34862739941, 00:21:33.975 "mibps": 74.91933057577894, 00:21:33.975 "io_failed": 0, 00:21:33.975 "io_timeout": 0, 00:21:33.975 "avg_latency_us": 6661.291076062936, 00:21:33.975 "min_latency_us": 4051.316363636364, 00:21:33.975 "max_latency_us": 14477.498181818182 00:21:33.975 } 00:21:33.975 ], 00:21:33.975 "core_count": 1 00:21:33.975 } 00:21:33.975 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:33.975 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:33.975 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:33.975 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:33.975 | select(.opcode=="crc32c") 00:21:33.975 | "\(.module_name) \(.executed)"' 00:21:33.975 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94668 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94668 ']' 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94668 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94668 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
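The remaining iteration below repeats the same pattern with randwrite and 131072-byte I/O at queue depth 16. Each run_bperf iteration follows the same outline: start a fresh bdevperf with --wait-for-rpc on its own RPC socket, release it with framework_start_init, attach an NVMe-oF controller with data digest enabled (--ddgst), drive I/O for 2 seconds via bdevperf.py, then read back the accel stats and kill the process. In outline, with the commands taken from the invocations recorded in this log (the parameters shown are those of the final randwrite/131072 run; backgrounding with "&" stands in for the script's process handling):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests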
00:21:34.235 killing process with pid 94668 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94668' 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94668 00:21:34.235 Received shutdown signal, test time was about 2.000000 seconds 00:21:34.235 00:21:34.235 Latency(us) 00:21:34.235 [2024-11-26T16:26:59.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.235 [2024-11-26T16:26:59.888Z] =================================================================================================================== 00:21:34.235 [2024-11-26T16:26:59.888Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94668 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94722 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94722 /var/tmp/bperf.sock 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94722 ']' 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.235 16:26:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:34.235 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:34.235 Zero copy mechanism will not be used. 00:21:34.235 [2024-11-26 16:26:59.879079] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:21:34.235 [2024-11-26 16:26:59.879191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94722 ] 00:21:34.494 [2024-11-26 16:27:00.031040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.494 [2024-11-26 16:27:00.050436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.428 16:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.428 16:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:35.428 16:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:35.428 16:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:35.428 16:27:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:35.688 [2024-11-26 16:27:01.141038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:35.688 16:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:35.688 16:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:35.947 nvme0n1 00:21:35.947 16:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:35.947 16:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:35.947 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:35.947 Zero copy mechanism will not be used. 00:21:35.947 Running I/O for 2 seconds... 
00:21:38.260 7053.00 IOPS, 881.62 MiB/s [2024-11-26T16:27:03.913Z] 7085.00 IOPS, 885.62 MiB/s 00:21:38.260 Latency(us) 00:21:38.260 [2024-11-26T16:27:03.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.260 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:38.260 nvme0n1 : 2.00 7082.75 885.34 0.00 0.00 2254.17 1772.45 7626.01 00:21:38.260 [2024-11-26T16:27:03.913Z] =================================================================================================================== 00:21:38.260 [2024-11-26T16:27:03.913Z] Total : 7082.75 885.34 0.00 0.00 2254.17 1772.45 7626.01 00:21:38.260 { 00:21:38.260 "results": [ 00:21:38.260 { 00:21:38.260 "job": "nvme0n1", 00:21:38.260 "core_mask": "0x2", 00:21:38.260 "workload": "randwrite", 00:21:38.260 "status": "finished", 00:21:38.260 "queue_depth": 16, 00:21:38.260 "io_size": 131072, 00:21:38.260 "runtime": 2.003882, 00:21:38.260 "iops": 7082.752377635011, 00:21:38.260 "mibps": 885.3440472043764, 00:21:38.260 "io_failed": 0, 00:21:38.260 "io_timeout": 0, 00:21:38.260 "avg_latency_us": 2254.165307097609, 00:21:38.260 "min_latency_us": 1772.4509090909091, 00:21:38.260 "max_latency_us": 7626.007272727273 00:21:38.260 } 00:21:38.260 ], 00:21:38.260 "core_count": 1 00:21:38.260 } 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:38.260 | select(.opcode=="crc32c") 00:21:38.260 | "\(.module_name) \(.executed)"' 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94722 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94722 ']' 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94722 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.260 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94722 00:21:38.260 killing process with pid 94722 00:21:38.260 Received shutdown signal, test time was about 2.000000 seconds 00:21:38.260 00:21:38.260 Latency(us) 00:21:38.260 [2024-11-26T16:27:03.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:38.260 [2024-11-26T16:27:03.913Z] =================================================================================================================== 00:21:38.260 [2024-11-26T16:27:03.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.261 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.261 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.261 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94722' 00:21:38.261 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94722 00:21:38.261 16:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94722 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94549 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94549 ']' 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94549 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94549 00:21:38.520 killing process with pid 94549 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94549' 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94549 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94549 00:21:38.520 00:21:38.520 real 0m15.267s 00:21:38.520 user 0m30.068s 00:21:38.520 sys 0m4.255s 00:21:38.520 ************************************ 00:21:38.520 END TEST nvmf_digest_clean 00:21:38.520 ************************************ 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.520 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:38.780 ************************************ 00:21:38.780 START TEST nvmf_digest_error 00:21:38.780 ************************************ 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:21:38.780 16:27:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=94805 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 94805 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94805 ']' 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.780 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:38.780 [2024-11-26 16:27:04.283372] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:21:38.780 [2024-11-26 16:27:04.283472] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.040 [2024-11-26 16:27:04.430136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.040 [2024-11-26 16:27:04.448079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.040 [2024-11-26 16:27:04.448132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.040 [2024-11-26 16:27:04.448158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.040 [2024-11-26 16:27:04.448165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.040 [2024-11-26 16:27:04.448170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
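Where nvmf_digest_clean verified that digests are computed by the expected accel module, nvmf_digest_error starts a target whose crc32c operations are routed to the accel "error" module, so corruption can be injected on demand and the host-side handling of digest failures exercised. The RPCs that do this appear further below; in outline (command forms copied from the log, issued against the target's default /var/tmp/spdk.sock socket):

  scripts/rpc.py accel_assign_opc -o crc32c -m error                    # route crc32c through the error-injection module
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # start with injection disabled
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # later: corrupt the next 256 crc32c operations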
00:21:39.040 [2024-11-26 16:27:04.448453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.040 [2024-11-26 16:27:04.560869] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.040 [2024-11-26 16:27:04.595053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.040 null0 00:21:39.040 [2024-11-26 16:27:04.625453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.040 [2024-11-26 16:27:04.649553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94824 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94824 /var/tmp/bperf.sock 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:39.040 16:27:04 
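[annotation] The target configuration in this stretch (accel_assign_opc at digest.sh@104 plus common_target_config at digest.sh@43) is driven through rpc_cmd, so only its side effects appear in the trace: the null0 bdev, the TCP transport, and the listener on 10.0.0.3:4420. A rough reconstruction of the equivalent RPC sequence follows; the null bdev size/block size and the subsystem NQN (taken from the later attach call) are assumptions, not values printed here.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o crc32c -m error            # route crc32c through the error-injection accel module
  $rpc framework_start_init                           # finish the init deferred by --wait-for-rpc
  $rpc bdev_null_create null0 100 4096                # "null0" backing bdev (size and block size assumed)
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420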
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94824 ']' 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:39.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.040 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.300 [2024-11-26 16:27:04.713166] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:21:39.300 [2024-11-26 16:27:04.713726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94824 ] 00:21:39.300 [2024-11-26 16:27:04.861743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.300 [2024-11-26 16:27:04.880463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.300 [2024-11-26 16:27:04.907582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.559 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.559 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:39.559 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:39.559 16:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:39.559 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:39.559 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.559 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.559 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.559 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:39.559 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:39.818 nvme0n1 00:21:40.078 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:40.078 16:27:05 
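[annotation] Condensing the host-side setup traced above: tune the bdev_nvme layer in the bdevperf instance, make sure the target's crc32c error injection starts disabled, attach the controller with the NVMe/TCP data digest enabled, then arm the corruption. Every flag below is taken from the trace; only the shell variables are added for readability.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf=/var/tmp/bperf.sock
  $rpc -s $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable          # target socket: injection starts off
  $rpc -s $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # --ddgst enables the data digest check
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # target socket: start corrupting crc32c results (-i 256 as in the trace)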
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.078 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:40.078 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.078 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:40.078 16:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:40.078 Running I/O for 2 seconds... 00:21:40.078 [2024-11-26 16:27:05.637942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.078 [2024-11-26 16:27:05.638001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.078 [2024-11-26 16:27:05.638015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.078 [2024-11-26 16:27:05.652040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.078 [2024-11-26 16:27:05.652075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.078 [2024-11-26 16:27:05.652103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.078 [2024-11-26 16:27:05.666213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.078 [2024-11-26 16:27:05.666247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.078 [2024-11-26 16:27:05.666275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.078 [2024-11-26 16:27:05.680306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.078 [2024-11-26 16:27:05.680339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.078 [2024-11-26 16:27:05.680394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.078 [2024-11-26 16:27:05.694399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.078 [2024-11-26 16:27:05.694430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.078 [2024-11-26 16:27:05.694458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.078 [2024-11-26 16:27:05.708328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.078 [2024-11-26 16:27:05.708386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11536 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.078 [2024-11-26 16:27:05.708414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.078 [2024-11-26 16:27:05.723753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.078 [2024-11-26 16:27:05.723807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.078 [2024-11-26 16:27:05.723837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.741885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.741921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.741950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.757890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.757924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.757951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.772541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.772575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.772603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.786632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.786665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.786692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.800621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.800877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.800895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.814972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.815136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:18565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.815168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.829339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.829564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.829700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.843936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.844133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.844276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.858624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.858856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.858979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.873570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.873787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.873906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.888091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.888303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.888462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.902827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.903040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-26 16:27:05.903166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.338 [2024-11-26 16:27:05.917377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.338 [2024-11-26 16:27:05.917602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.339 [2024-11-26 16:27:05.917737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.339 [2024-11-26 16:27:05.932284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.339 [2024-11-26 16:27:05.932525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.339 [2024-11-26 16:27:05.932644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.339 [2024-11-26 16:27:05.947139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.339 [2024-11-26 16:27:05.947348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.339 [2024-11-26 16:27:05.947487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.339 [2024-11-26 16:27:05.961926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.339 [2024-11-26 16:27:05.962101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.339 [2024-11-26 16:27:05.962133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.339 [2024-11-26 16:27:05.976605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.339 [2024-11-26 16:27:05.976809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.339 [2024-11-26 16:27:05.976827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:05.992190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:05.992226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:05.992255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.006795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.006830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.006857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.021054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 
[2024-11-26 16:27:06.021133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.021160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.035171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.035395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.035413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.049557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.049589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.049616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.063506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.063537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.063565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.077551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.077582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.077609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.091561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.091592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.091619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.105607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.105637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.105664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.119554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.119584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.119612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.133631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.133661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.133688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.147614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.147645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.147671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.161523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.161554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.161581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.175694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.175724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.175751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.189699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.598 [2024-11-26 16:27:06.189730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.598 [2024-11-26 16:27:06.189758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.598 [2024-11-26 16:27:06.203740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.599 [2024-11-26 16:27:06.203771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.599 [2024-11-26 16:27:06.203798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.599 [2024-11-26 16:27:06.217806] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.599 [2024-11-26 16:27:06.217837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.599 [2024-11-26 16:27:06.217863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.599 [2024-11-26 16:27:06.231727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.599 [2024-11-26 16:27:06.231757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.599 [2024-11-26 16:27:06.231784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.246691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.246727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.246754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.261042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.261305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.261321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.275466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.275680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.275806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.290145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.290368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.290497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.304755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.304974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.305132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
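[annotation] Each record in the run above pairs the transport-level detection (nvme_tcp.c: "data digest error") with the completion the host then reports, COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 with status code 0x22. When post-processing a saved copy of this log, the digest failures can be tallied with a one-liner; the file name below is hypothetical.

  grep -c 'data digest error on tqpair' nvmf_digest_error.log   # count digest failures surfaced by the initiator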
00:21:40.858 [2024-11-26 16:27:06.319402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.319596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.319755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.334052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.334263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.334439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.349333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.349531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.349700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.365475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.365666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.365787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.382222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.382451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.382627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.398467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.398674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.398861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.413872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.414050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.414243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.429348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.429561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.429696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.444871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.445083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.445227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.858 [2024-11-26 16:27:06.460375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.858 [2024-11-26 16:27:06.460592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.858 [2024-11-26 16:27:06.460732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.859 [2024-11-26 16:27:06.475770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.859 [2024-11-26 16:27:06.475968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.859 [2024-11-26 16:27:06.476102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.859 [2024-11-26 16:27:06.491229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:40.859 [2024-11-26 16:27:06.491443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.859 [2024-11-26 16:27:06.491577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.118 [2024-11-26 16:27:06.507180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.118 [2024-11-26 16:27:06.507399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.118 [2024-11-26 16:27:06.507594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.118 [2024-11-26 16:27:06.523051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.118 [2024-11-26 16:27:06.523088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.118 [2024-11-26 16:27:06.523116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.118 [2024-11-26 16:27:06.538246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.118 [2024-11-26 16:27:06.538280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.118 [2024-11-26 16:27:06.538308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.118 [2024-11-26 16:27:06.553207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.118 [2024-11-26 16:27:06.553251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.118 [2024-11-26 16:27:06.553278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.118 [2024-11-26 16:27:06.573743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.573775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.573802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.587741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.587774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.587801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.601806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.601840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.601867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.615638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.615671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.615697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 17205.00 IOPS, 67.21 MiB/s [2024-11-26T16:27:06.772Z] [2024-11-26 16:27:06.631057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.631090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:21782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.631117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.644993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.645043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.645070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.658982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.659014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.659041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.672972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.673007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.673050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.687460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.687492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.687518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.701634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.701667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.701694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.715659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.715690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.715717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.729806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.729838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.729865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.744671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.744915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.744933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.119 [2024-11-26 16:27:06.761805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.119 [2024-11-26 16:27:06.761843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.119 [2024-11-26 16:27:06.761872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.778958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.778994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.779022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.794190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.794223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.794250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.808373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.808405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.808432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.822267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.822299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.822325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.836143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 
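[annotation] The periodic bdevperf statistics line embedded just above (17205.00 IOPS, 67.21 MiB/s) is consistent with the 4096-byte randread workload; a quick arithmetic check:

  # 4 KiB per I/O: IOPS * 4096 / 1048576 should reproduce the MiB/s column
  echo 'scale=4; 17205 * 4096 / 1048576' | bc   # -> 67.2070, matching ~67.21 (both figures are rounded in the log)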
00:21:41.379 [2024-11-26 16:27:06.836174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.836201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.850153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.850185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.850211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.864052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.864085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.864113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.878853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.878885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.878911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.892823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.892857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.892884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.906770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.906949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.906981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.920972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.921153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.921184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.935174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.935207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.935235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.949328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.949402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.949415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.963312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.963370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.963398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.977276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.977308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.977336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:06.991166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:06.991198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:06.991225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:07.005283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:07.005316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:07.005342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.379 [2024-11-26 16:27:07.019162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.379 [2024-11-26 16:27:07.019193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.379 [2024-11-26 16:27:07.019220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.638 [2024-11-26 16:27:07.034452] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.638 [2024-11-26 16:27:07.034486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.638 [2024-11-26 16:27:07.034515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.638 [2024-11-26 16:27:07.048441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.638 [2024-11-26 16:27:07.048475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.638 [2024-11-26 16:27:07.048502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.638 [2024-11-26 16:27:07.062446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.638 [2024-11-26 16:27:07.062478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.638 [2024-11-26 16:27:07.062504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.638 [2024-11-26 16:27:07.076210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.638 [2024-11-26 16:27:07.076242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.638 [2024-11-26 16:27:07.076268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.090265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.090296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.090323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.104229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.104263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.104290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.118135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.118185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.118212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:41.639 [2024-11-26 16:27:07.132090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.132122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.132149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.146133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.146165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.146193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.159997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.160028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.160056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.174612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.174643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.174669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.188321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.188415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.188428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.202763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.202794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.202822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.216678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.216709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.216758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.230735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.230766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.230794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.244761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.244810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.244821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.258752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.258784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.258811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.639 [2024-11-26 16:27:07.272807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.639 [2024-11-26 16:27:07.273000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.639 [2024-11-26 16:27:07.273048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.287919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.288118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.288151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.302688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.302888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.302921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.317136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.317313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.317345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.331482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.331693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.331821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.346018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.346230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.346393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.360604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.360811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.360951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.375437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.375650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.375779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.390100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.390313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.390472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.404880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.405101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.405254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.419608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.419819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.419943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.434144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.434365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.434495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.448905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.449153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.449267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.463476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.463509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.463537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.477547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.477579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.477607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.497548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.497581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.497609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.511499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.511530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.511556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.525589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.525622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 
16:27:07.525649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.898 [2024-11-26 16:27:07.539508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:41.898 [2024-11-26 16:27:07.539539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.898 [2024-11-26 16:27:07.539566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.157 [2024-11-26 16:27:07.554957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:42.157 [2024-11-26 16:27:07.554994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.157 [2024-11-26 16:27:07.555023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.157 [2024-11-26 16:27:07.571120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:42.157 [2024-11-26 16:27:07.571158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.157 [2024-11-26 16:27:07.571187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.157 [2024-11-26 16:27:07.587547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:42.157 [2024-11-26 16:27:07.587581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.157 [2024-11-26 16:27:07.587609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.157 [2024-11-26 16:27:07.602752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:42.157 [2024-11-26 16:27:07.602801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.157 [2024-11-26 16:27:07.602828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.157 [2024-11-26 16:27:07.619199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcfcc80) 00:21:42.157 [2024-11-26 16:27:07.619233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.157 [2024-11-26 16:27:07.619260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.157 17331.50 IOPS, 67.70 MiB/s 00:21:42.157 Latency(us) 00:21:42.157 [2024-11-26T16:27:07.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.157 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:42.157 nvme0n1 : 2.01 17365.67 67.83 0.00 0.00 
7365.51 6613.18 27167.65 00:21:42.157 [2024-11-26T16:27:07.810Z] =================================================================================================================== 00:21:42.157 [2024-11-26T16:27:07.810Z] Total : 17365.67 67.83 0.00 0.00 7365.51 6613.18 27167.65 00:21:42.157 { 00:21:42.157 "results": [ 00:21:42.157 { 00:21:42.157 "job": "nvme0n1", 00:21:42.157 "core_mask": "0x2", 00:21:42.158 "workload": "randread", 00:21:42.158 "status": "finished", 00:21:42.158 "queue_depth": 128, 00:21:42.158 "io_size": 4096, 00:21:42.158 "runtime": 2.010691, 00:21:42.158 "iops": 17365.67180138569, 00:21:42.158 "mibps": 67.83465547416286, 00:21:42.158 "io_failed": 0, 00:21:42.158 "io_timeout": 0, 00:21:42.158 "avg_latency_us": 7365.507358905665, 00:21:42.158 "min_latency_us": 6613.178181818182, 00:21:42.158 "max_latency_us": 27167.65090909091 00:21:42.158 } 00:21:42.158 ], 00:21:42.158 "core_count": 1 00:21:42.158 } 00:21:42.158 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:42.158 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:42.158 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:42.158 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:42.158 | .driver_specific 00:21:42.158 | .nvme_error 00:21:42.158 | .status_code 00:21:42.158 | .command_transient_transport_error' 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94824 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94824 ']' 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94824 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94824 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.416 killing process with pid 94824 00:21:42.416 Received shutdown signal, test time was about 2.000000 seconds 00:21:42.416 00:21:42.416 Latency(us) 00:21:42.416 [2024-11-26T16:27:08.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.416 [2024-11-26T16:27:08.069Z] =================================================================================================================== 00:21:42.416 [2024-11-26T16:27:08.069Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94824' 00:21:42.416 16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94824 00:21:42.416 
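For reference, the pass/fail check traced just above (host/digest.sh@71) reduces to one RPC plus a jq filter: read the bdev's I/O statistics over the bperf socket and pull out the NVMe transient-transport-error counter that --nvme-error-stat maintains. A minimal sketch of that query, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev used in this run:

    # Count READ completions that finished with COMMAND TRANSIENT TRANSPORT ERROR,
    # i.e. the data digest errors provoked by this test (same filter digest.sh uses).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # The iostat snapshot above reported 136 such completions, so this check passes.
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"

With io_failed at 0 in the JSON results, the injected digest errors show up only in this counter rather than as failed I/O.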
16:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94824 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94877 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94877 /var/tmp/bperf.sock 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94877 ']' 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:42.416 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.417 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:42.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:42.417 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.417 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:42.675 [2024-11-26 16:27:08.113422] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:21:42.675 [2024-11-26 16:27:08.113692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94877 ] 00:21:42.675 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:42.675 Zero copy mechanism will not be used. 
00:21:42.675 [2024-11-26 16:27:08.257034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.675 [2024-11-26 16:27:08.275877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.675 [2024-11-26 16:27:08.303060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:42.934 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.934 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:42.934 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:42.934 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:43.192 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:43.192 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.192 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:43.192 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.192 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:43.192 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:43.451 nvme0n1 00:21:43.451 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:43.451 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.451 16:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:43.451 16:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.451 16:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:43.451 16:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:43.451 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:43.451 Zero copy mechanism will not be used. 00:21:43.451 Running I/O for 2 seconds... 
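The trace above wires up this second pass (randread, 128 KiB blocks, queue depth 16): NVMe error statistics and unlimited bdev retries are enabled on the bdevperf instance, the controller is attached with TCP data digest enabled, and crc32c error injection is switched from disable to corrupt so digests stop matching. A condensed sketch of that RPC sequence, assuming bperf listens on /var/tmp/bperf.sock as traced and that the rpc_cmd error-injection call goes to the target application's default rpc.py socket:

    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Track per-status NVMe error counters and retry failed I/O indefinitely.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the target with TCP data digest (--ddgst) turned on.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption in the accel layer (arguments taken verbatim from the trace).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # Run the 2-second workload on the attached nvme0n1 bdev.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each digest mismatch then appears below as a host-side data digest error followed by a READ completion with COMMAND TRANSIENT TRANSPORT ERROR status.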
00:21:43.451 [2024-11-26 16:27:09.097722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.451 [2024-11-26 16:27:09.097786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.451 [2024-11-26 16:27:09.097800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.716 [2024-11-26 16:27:09.102259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.716 [2024-11-26 16:27:09.102295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.716 [2024-11-26 16:27:09.102325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.716 [2024-11-26 16:27:09.106142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.716 [2024-11-26 16:27:09.106176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.716 [2024-11-26 16:27:09.106205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.716 [2024-11-26 16:27:09.109995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.716 [2024-11-26 16:27:09.110029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.716 [2024-11-26 16:27:09.110058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.113784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.113817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.113847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.117808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.117841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.117870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.121802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.121835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.121864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.125587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.125619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.125648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.129351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.129577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.129610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.133369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.133559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.133591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.137406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.137449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.137478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.141204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.141428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.141446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.145239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.145443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.145476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.149309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.149500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.149533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.153414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.153445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.153474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.157164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.157370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.157404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.161104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.161261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.161294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.165184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.165387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.165405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.169300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.169487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.169521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.173426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.173630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.173647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.177447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.177655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.177688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.181459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.181640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.181679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.185593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.185626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.185656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.189409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.189440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.189468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.193118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.193311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.193327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.197011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.197064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.197093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.200868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.200904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.200934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.204715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.204773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 
[2024-11-26 16:27:09.204803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.208445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.208478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.208507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.212224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.212408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.212441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.216214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.216415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.216448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.219968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.219997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.220025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.223709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.223885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.223918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.227740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.227773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.227802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.231440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.231471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.231500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.235228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.235412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.235445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.239269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.239475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.239509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.243463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.243495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.243523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.247228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.247411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.247445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.251294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.251500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.251533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.255320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.255531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.255564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.259367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.259398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.259426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.263100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.263276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.263309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.267125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.267303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.267335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.271193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.271393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.271410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.275148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.275323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.275369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.279146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.279320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.279354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.283100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.283275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.283307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.287009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.287202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.287218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.291041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.291216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.291248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.295037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.295212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.295245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.299108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.299283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.299316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.303189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.303407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.303425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.307310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.307514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.307531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.717 [2024-11-26 16:27:09.311295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.717 [2024-11-26 16:27:09.311502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.717 [2024-11-26 16:27:09.311535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.315273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 
[2024-11-26 16:27:09.315493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.315512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.319261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.319446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.319479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.323214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.323418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.323451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.327244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.327449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.327482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.331257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.331442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.331474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.335292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.335497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.335531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.339288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.339493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.339525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.343317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.343504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.343537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.347268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.347472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.347505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.351321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.351530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.351562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.355606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.355641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.355670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.718 [2024-11-26 16:27:09.359759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.718 [2024-11-26 16:27:09.359794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.718 [2024-11-26 16:27:09.359823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.363624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.363661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.363690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.367657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.367720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.367741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.371570] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.371604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.371632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.375351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.375382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.375410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.379121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.379315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.379331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.383160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.383353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.383381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.387138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.387314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.387347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.391100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.391289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.391306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.395281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.395479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.395512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:21:43.977 [2024-11-26 16:27:09.399411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.399446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.399474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.403248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.403468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.403485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.407392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.407424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.407451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.411126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.411317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.411333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.415129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.415320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.415336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.419108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.419283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.419316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.423129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.423305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.423337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.427136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.427314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.427331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.431214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.977 [2024-11-26 16:27:09.431457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.977 [2024-11-26 16:27:09.431634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.977 [2024-11-26 16:27:09.435430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.435639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.435766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.439582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.439781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.439897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.444090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.444294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.444451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.448229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.448461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.448597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.452484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.452686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.452836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.456892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.457072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.457233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.460993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.461207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.461410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.465400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.465632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.465779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.469547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.469758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.469885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.473676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.473881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.473997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.477923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.477956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.477984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.481800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.481834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.481862] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.485538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.485570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.485598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.489289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.489321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.489349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.493114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.493163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.493192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.496905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.496939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.496952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.500639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.500672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.500700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.504305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.504338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.504410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.508154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.508186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:43.978 [2024-11-26 16:27:09.508215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.511935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.511967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.511994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.515682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.515714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.515742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.519430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.519461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.519489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.523173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.523394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.523412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.527171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.527371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.527388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.531134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.531309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.531343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.535150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.535343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.535390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.539160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.539350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.978 [2024-11-26 16:27:09.539376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.978 [2024-11-26 16:27:09.543057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.978 [2024-11-26 16:27:09.543232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.543264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.547112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.547306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.547322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.551162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.551354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.551380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.555128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.555319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.555336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.559119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.559296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.559327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.563130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.563305] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.563339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.567089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.567280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.567296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.571209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.571431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.571449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.575231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.575413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.575445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.579229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.579411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.579443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.583316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.583542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.583559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.587328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.587511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.587543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.591399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.591431] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.591459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.595185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.595403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.595421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.599260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.599479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.599496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.603313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.603498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.603530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.607367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.607399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.607428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.611162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.611352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.611398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.615210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:43.979 [2024-11-26 16:27:09.615409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.615426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.979 [2024-11-26 16:27:09.619300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 
00:21:43.979 [2024-11-26 16:27:09.619495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.979 [2024-11-26 16:27:09.619512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.624099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.624137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.624167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.628048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.628082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.628109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.632235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.632270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.632298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.636094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.636128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.636156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.639963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.639996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.640024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.643773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.643805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.643833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.647560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.647593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.647622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.651401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.651434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.651463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.655199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.655421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.655454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.659264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.659487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.659504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.663210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.663240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.663268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.667069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.667265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.667458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.671472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.671647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.671803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.675586] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.675816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.675945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.679846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.680045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.680173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.684141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.684344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.684550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.688449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.688649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.688874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.692751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.692935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.693082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.697219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.697463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.697629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.701599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.701789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.701921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:21:44.240 [2024-11-26 16:27:09.705942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.706153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.706280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.710245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.710422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.710454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.714250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.714286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.240 [2024-11-26 16:27:09.714315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.240 [2024-11-26 16:27:09.718073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.240 [2024-11-26 16:27:09.718105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.718134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.721981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.722013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.722042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.725819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.725851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.725879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.729660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.729693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.729720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.733480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.733512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.733540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.737272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.737304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.737332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.741023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.741073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.741086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.744882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.744916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.744945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.748707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.748782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.748796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.754151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.754425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.754684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.760193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.760229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.760243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.766090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.766142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.766171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.770427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.770475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.770487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.774236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.774282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.774293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.778074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.778119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.778130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.781861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.781906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.781917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.785719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.785764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.785776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.789509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.789554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.789565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.793229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.793273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.793284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.796969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.797000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.797011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.800785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.800831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.800843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.804503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.804559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.804570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.808340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.808396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.808407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.812187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.812248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.812260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.816601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.816647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 
[2024-11-26 16:27:09.816658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.820778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.820825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.820837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.825253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.825299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.241 [2024-11-26 16:27:09.825311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.241 [2024-11-26 16:27:09.829559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.241 [2024-11-26 16:27:09.829602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.829615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.834235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.834283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.834296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.838764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.838810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.838821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.843004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.843049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.843060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.847159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.847203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.847215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.851463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.851511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.851523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.855685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.855746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.855773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.859740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.859801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.859812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.863679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.863714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.863724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.867459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.867519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.867530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.871400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.871445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.871457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.875373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.875426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.875437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.879285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.879330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.879341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.242 [2024-11-26 16:27:09.883463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.242 [2024-11-26 16:27:09.883513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.242 [2024-11-26 16:27:09.883525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.887828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.887878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.887890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.891860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.891925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.891937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.895854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.895902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.895913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.899632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.899678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.899689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.903377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.903422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.903433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.907171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.907216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.907227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.911045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.911091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.911102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.914926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.914971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.914982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.918730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.918773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.918784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.922563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.922607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.922618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.926286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.926330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.926341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.930039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 
[2024-11-26 16:27:09.930083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.930094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.933946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.933991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.934002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.937816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.937861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.937872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.941869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.941914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.941925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.945918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.945963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.945974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.949982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.950029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.950040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.954112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.954175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.954188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.958558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.958604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.958616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.962920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.962967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.962978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.967283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.967328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.967340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.971465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.971511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.971522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.975559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.975605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.975616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.501 [2024-11-26 16:27:09.979685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.501 [2024-11-26 16:27:09.979716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.501 [2024-11-26 16:27:09.979727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:09.983709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:09.983754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:09.983765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:09.987672] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:09.987720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:09.987732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:09.991715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:09.991762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:09.991774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:09.995583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:09.995628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:09.995639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:09.999644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:09.999679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:09.999694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.003819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.003867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.003879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.008111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.008157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.008169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.012053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.012099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.012111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:21:44.502 [2024-11-26 16:27:10.016130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.016177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.016189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.020589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.020636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.020648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.024886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.024920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.024933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.029103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.029162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.029173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.033170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.033215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.033227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.037162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.037207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.037218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.041131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.041166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.041179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.045201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.045246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.045257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.049417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.049470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.049482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.053383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.053438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.053450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.057220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.057265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.057277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.061178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.061222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.061233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.065126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.065185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.065196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.069237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.069282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.069293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.073269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.073314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.073326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.077152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.077198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.077209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.081096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.081170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.081182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.085068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.085129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.085154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.089205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.089250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.089261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 7657.00 IOPS, 957.12 MiB/s [2024-11-26T16:27:10.155Z] [2024-11-26 16:27:10.094171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.094216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.094227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.098101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.098146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:44.502 [2024-11-26 16:27:10.098158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.102078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.102122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.102134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.106241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.106287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.106300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.110180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.110226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.110237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.114028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.114073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.114085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.118049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.118093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.118104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.122018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.122063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.122074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.125971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.126018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.126029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.129910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.129955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.129966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.133796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.133841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.133852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.137638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.137683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.137694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.141540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.141584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.141595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.502 [2024-11-26 16:27:10.145745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.502 [2024-11-26 16:27:10.145792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.502 [2024-11-26 16:27:10.145805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.150104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.150153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.150164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.154461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.154509] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.154521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.158392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.158437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.158449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.162466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.162511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.162523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.166439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.166484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.166496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.170325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.170394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.170406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.174230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.174275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.174286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.178091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.178135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.178146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.181948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.181992] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.182004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.185721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.185765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.185776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.189567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.189611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.189622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.193364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.193419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.193430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.197175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.197219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.197230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.200966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.201014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.201027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.204939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.204985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.204996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.208772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 
00:21:44.763 [2024-11-26 16:27:10.208816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.208827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.212466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.212510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.212520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.216166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.216210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.216221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.219920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.219964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.219975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.223719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.223764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.223775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.227463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.763 [2024-11-26 16:27:10.227506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.763 [2024-11-26 16:27:10.227517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.763 [2024-11-26 16:27:10.231207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.231250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.231261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.235056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.235099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.235111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.238915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.238960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.238971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.242810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.242855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.242866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.246673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.246718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.246729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.250452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.250496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.250507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.254248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.254292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.254303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.258129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.258173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.258184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.262104] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.262150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.262161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.266011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.266056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.266067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.269830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.269875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.269886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.273646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.273690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.273702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.277447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.277492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.277503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.281222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.281267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.281278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.285069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.285130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.285157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:21:44.764 [2024-11-26 16:27:10.288835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.288881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.288892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.292513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.292557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.292568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.296304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.296347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.296369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.300078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.300123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.300133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.303923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.303967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.303978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.307778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.307822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.307834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.311602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.311646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.311657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.315421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.315464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.315475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.319178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.319222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.319233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.323082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.323126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.323137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.327383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.327430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.327442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.331188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.764 [2024-11-26 16:27:10.331234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.764 [2024-11-26 16:27:10.331245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.764 [2024-11-26 16:27:10.335090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.335135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.335147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.338959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.339003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.339014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.342734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.342795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.342806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.346508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.346552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.346563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.350173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.350219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.350230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.354068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.354113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.354124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.357935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.357979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.357990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.361755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.361799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.361811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.365557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.365601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.365612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.369332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.369384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.369396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.373045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.373105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.373131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.376897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.376942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.376954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.380867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.380913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.380925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.384683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.384750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.384778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.388482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.388525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.388536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.392152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.392196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 
[2024-11-26 16:27:10.392207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.395992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.396037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.396048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.399780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.399824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.399835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.403580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.403624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.403634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:44.765 [2024-11-26 16:27:10.407773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:44.765 [2024-11-26 16:27:10.407822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.765 [2024-11-26 16:27:10.407849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.411904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.411951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.411962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.416001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.416050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.416063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.419841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.419887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.419898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.423684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.423729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.423741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.427425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.427469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.427480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.431225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.431270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.431281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.435127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.435172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.435184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.438966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.439011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.439022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.442795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.442839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.442850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.446538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.446583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.446594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.450182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.450227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.450238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.454107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.454152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.454164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.457988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.458032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.458044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.461771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.461815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.461826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.465553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.465596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.465607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.026 [2024-11-26 16:27:10.469293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.026 [2024-11-26 16:27:10.469337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.026 [2024-11-26 16:27:10.469348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.473070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.473115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.473141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.476852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.476898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.476910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.480593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.480637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.480648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.484376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.484420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.484431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.488100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.488145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.488156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.491902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.491946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.491957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.495809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.495854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.495865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.499627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 
00:21:45.027 [2024-11-26 16:27:10.499671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.499682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.503531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.503576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.503587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.507341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.507395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.507407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.511157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.511201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.511212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.514981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.515026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.515037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.518740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.518801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.518812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.522595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.522640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.522651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.526397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.526441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.526452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.530126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.530170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.530181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.534053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.534098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.534109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.537913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.537958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.537968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.541712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.541755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.541766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.545535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.545579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.545590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.549277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.549322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.549332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.553051] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.553109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.553136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.556845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.556892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.556903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.560556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.560600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.560611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.564296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.564340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.564351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.567990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.568035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.568046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.571811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.027 [2024-11-26 16:27:10.571856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.027 [2024-11-26 16:27:10.571867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.027 [2024-11-26 16:27:10.575589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.575634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.575644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:21:45.028 [2024-11-26 16:27:10.579423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.579466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.579477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.583151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.583194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.583205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.586960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.587004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.587015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.590805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.590848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.590859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.594544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.594588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.594600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.598231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.598275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.598286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.602122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.602165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.602177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.605977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.606021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.606032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.609822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.609866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.609877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.613670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.613715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.613725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.617423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.617467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.617477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.621125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.621168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.621180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.624884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.624928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.624939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.628610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.628654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.628665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.632390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.632433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.632444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.636116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.636160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.636170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.639953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.639998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.640008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.643747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.643791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.643803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.647591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.647635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.647646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.651307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.651351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.651375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.655087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.655132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.655142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.658983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.659027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.659039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.662791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.662834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.662845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.666540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.666584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.666595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.028 [2024-11-26 16:27:10.670908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.028 [2024-11-26 16:27:10.670955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.028 [2024-11-26 16:27:10.670967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.675062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.675108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.675120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.679208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.679257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.679284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.683239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.683284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:45.290 [2024-11-26 16:27:10.683295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.687074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.687119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.687131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.690875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.690920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.690931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.694630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.694675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.694687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.698394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.698438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.698449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.702258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.702303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.702314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.706039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.706083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.706094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.709917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.709962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.709973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.713730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.713776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.713787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.717524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.717567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.717578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.721345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.721397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.721408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.725012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.725058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.725084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.728845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.728893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.728907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.732528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.732571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.732581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.736398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.736442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.736453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.740220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.740263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.740274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.744059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.744103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.744115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.747884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.747928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.747939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.751660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.751704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.751732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.755556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.755602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.755613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.759202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.759246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.759256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.762954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.762998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.763008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.766781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.766826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.766837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.770569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.290 [2024-11-26 16:27:10.770613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.290 [2024-11-26 16:27:10.770624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.290 [2024-11-26 16:27:10.774331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.774384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.774396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.778039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.778084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.778095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.781889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.781933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.781944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.785672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.785717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.785728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.789384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 
[2024-11-26 16:27:10.789436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.789448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.793101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.793161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.793172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.796835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.796880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.796892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.800638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.800682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.800694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.804445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.804487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.804498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.808177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.808222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.808233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.812008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.812053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.812064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.815802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.815846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.815857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.819573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.819618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.819629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.823348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.823416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.823428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.827141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.827186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.827198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.830982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.831026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.831037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.834821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.834865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.834876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.839110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.839156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.839168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.843319] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.843390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.843403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.847848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.847894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.847921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.852126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.852172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.852184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.856867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.856913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.856926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.861339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.861429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.861442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.865757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.865833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.865844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.870208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.870253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.870264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:21:45.291 [2024-11-26 16:27:10.874495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.874541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.874553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.291 [2024-11-26 16:27:10.878649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.291 [2024-11-26 16:27:10.878693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.291 [2024-11-26 16:27:10.878704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.882622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.882666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.882677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.886597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.886640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.886651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.890325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.890377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.890388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.894110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.894154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.894165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.897967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.898012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.898022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.901928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.901973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.901984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.905692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.905736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.905747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.909461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.909504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.909515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.913034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.913079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.913090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.916780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.916825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.916836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.920468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.920513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.920524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.924226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.924270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.924281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.927902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.927946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.927957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.292 [2024-11-26 16:27:10.931965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.292 [2024-11-26 16:27:10.932029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.292 [2024-11-26 16:27:10.932040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.552 [2024-11-26 16:27:10.936395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.552 [2024-11-26 16:27:10.936456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.552 [2024-11-26 16:27:10.936470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.552 [2024-11-26 16:27:10.940372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.552 [2024-11-26 16:27:10.940417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.940428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.944483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.944532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.944544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.948232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.948277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.948288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.952105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.952151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.952163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.955914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.955958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.955969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.959774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.959819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.959829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.963553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.963597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.963609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.967278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.967323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.967334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.971107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.971153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.971165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.975056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.975101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.975112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.978940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.978984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 
[2024-11-26 16:27:10.978995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.982735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.982781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.982791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.986544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.986587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.986599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.990254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.990299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.990309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.994023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.994068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.994080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:10.997804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:10.997848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:10.997858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.001572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.001617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.001628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.005317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.005372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.005384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.009153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.009196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.009206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.012817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.012862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.012874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.016512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.016556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.016567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.020196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.020240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.020251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.024069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.024114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.024125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.027964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.028009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.028020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.031749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.031795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.031806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.035548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.035592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.035603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.039204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.553 [2024-11-26 16:27:11.039249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.553 [2024-11-26 16:27:11.039259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.553 [2024-11-26 16:27:11.042944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.042988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.042998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.046764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.046809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.046820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.050623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.050667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.050678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.054431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.054476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.054486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.058116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.058160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.058171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.061923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.061967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.061977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.065714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.065758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.065769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.069429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.069472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.069483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.073165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.073209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.073220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.076923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.076969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.076981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.080691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.080758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.080787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.084385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 
[2024-11-26 16:27:11.084430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.084441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.554 [2024-11-26 16:27:11.088019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.088063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.088074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.554 7843.00 IOPS, 980.38 MiB/s [2024-11-26T16:27:11.207Z] [2024-11-26 16:27:11.093258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe08d60) 00:21:45.554 [2024-11-26 16:27:11.093288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.554 [2024-11-26 16:27:11.093299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.554 00:21:45.554 Latency(us) 00:21:45.554 [2024-11-26T16:27:11.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.554 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:45.554 nvme0n1 : 2.00 7842.87 980.36 0.00 0.00 2036.94 1638.40 8579.26 00:21:45.554 [2024-11-26T16:27:11.207Z] =================================================================================================================== 00:21:45.554 [2024-11-26T16:27:11.207Z] Total : 7842.87 980.36 0.00 0.00 2036.94 1638.40 8579.26 00:21:45.554 { 00:21:45.554 "results": [ 00:21:45.554 { 00:21:45.554 "job": "nvme0n1", 00:21:45.554 "core_mask": "0x2", 00:21:45.554 "workload": "randread", 00:21:45.554 "status": "finished", 00:21:45.554 "queue_depth": 16, 00:21:45.554 "io_size": 131072, 00:21:45.554 "runtime": 2.002074, 00:21:45.554 "iops": 7842.866946975986, 00:21:45.554 "mibps": 980.3583683719983, 00:21:45.554 "io_failed": 0, 00:21:45.554 "io_timeout": 0, 00:21:45.554 "avg_latency_us": 2036.9396000509491, 00:21:45.554 "min_latency_us": 1638.4, 00:21:45.554 "max_latency_us": 8579.258181818182 00:21:45.554 } 00:21:45.554 ], 00:21:45.554 "core_count": 1 00:21:45.554 } 00:21:45.554 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:45.554 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:45.554 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:45.554 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:45.554 | .driver_specific 00:21:45.554 | .nvme_error 00:21:45.554 | .status_code 00:21:45.554 | .command_transient_transport_error' 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 507 > 0 )) 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- 
# killprocess 94877 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94877 ']' 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94877 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94877 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:45.813 killing process with pid 94877 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94877' 00:21:45.813 Received shutdown signal, test time was about 2.000000 seconds 00:21:45.813 00:21:45.813 Latency(us) 00:21:45.813 [2024-11-26T16:27:11.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.813 [2024-11-26T16:27:11.466Z] =================================================================================================================== 00:21:45.813 [2024-11-26T16:27:11.466Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94877 00:21:45.813 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94877 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94924 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94924 /var/tmp/bperf.sock 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94924 ']' 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
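[editor's note] For reference, the get_transient_errcount / killprocess step traced above reduces to roughly the following shell sequence. This is a simplified sketch reconstructed from the xtrace output; the errcount variable name is illustrative, while the RPC socket, bdev name, jq path, pid and the 507 counter value are taken verbatim from the trace:

    # Query per-bdev I/O statistics from the bdevperf app listening on /var/tmp/bperf.sock,
    # then extract the transient-transport-error counter from its NVMe error statistics.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The leg passes only if digest-error injection actually produced transient transport
    # errors (the counter read 507 in this run); the bdevperf process is then torn down.
    (( errcount > 0 )) && killprocess 94877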
00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.073 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:46.073 [2024-11-26 16:27:11.597153] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:21:46.073 [2024-11-26 16:27:11.597266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94924 ] 00:21:46.332 [2024-11-26 16:27:11.739576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.332 [2024-11-26 16:27:11.757970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.332 [2024-11-26 16:27:11.785028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:46.332 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.332 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:46.332 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:46.332 16:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:46.590 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:46.590 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.590 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:46.590 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.590 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:46.590 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:46.849 nvme0n1 00:21:46.849 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:46.849 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.849 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:46.849 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.849 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:46.849 16:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:47.108 Running I/O for 2 seconds... 
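[editor's note] To recap the setup traced above before the randwrite error dump that follows: NVMe error statistics and unlimited retries are enabled in the new bdevperf instance, the controller is attached with data digest enabled, and the crc32c accel operation is told to corrupt a batch of operations so the host sees data digest errors. A condensed sketch of those RPCs, with socket path, address and NQN copied from the xtrace output; BPERF_RPC is illustrative shorthand for the bperf_rpc helper, and the assumption that rpc_cmd targets the NVMe-oF target application's default RPC socket is not shown explicitly in the trace:

    BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

    # Collect per-status-code NVMe error counters and retry failed I/O indefinitely,
    # so injected digest errors show up as statistics instead of failed commands.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with data digest enabled; --ddgst makes the initiator verify
    # the CRC32C data digest on every received TCP data PDU.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the digest of 256 crc32c operations (assumed to be target-side via rpc_cmd),
    # then start the load; each corrupted digest appears below as a data digest error
    # followed by a COMMAND TRANSIENT TRANSPORT ERROR completion.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests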
00:21:47.108 [2024-11-26 16:27:12.600760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fb048 00:21:47.108 [2024-11-26 16:27:12.602198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.602251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.615031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fb8b8 00:21:47.108 [2024-11-26 16:27:12.616401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.616478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.628651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fc128 00:21:47.108 [2024-11-26 16:27:12.630017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.630064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.642242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fc998 00:21:47.108 [2024-11-26 16:27:12.643567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.643616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.655679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fd208 00:21:47.108 [2024-11-26 16:27:12.657117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.657180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.669343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fda78 00:21:47.108 [2024-11-26 16:27:12.670598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.670644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.682860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fe2e8 00:21:47.108 [2024-11-26 16:27:12.684131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.684176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0076 p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.696148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166feb58 00:21:47.108 [2024-11-26 16:27:12.697514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.697560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.715164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fef90 00:21:47.108 [2024-11-26 16:27:12.717540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.717584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.728607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166feb58 00:21:47.108 [2024-11-26 16:27:12.730814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.730859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:47.108 [2024-11-26 16:27:12.742076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fe2e8 00:21:47.108 [2024-11-26 16:27:12.744303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.108 [2024-11-26 16:27:12.744333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.756288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fda78 00:21:47.368 [2024-11-26 16:27:12.758900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.758967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.770736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fd208 00:21:47.368 [2024-11-26 16:27:12.773043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.773123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.784472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fc998 00:21:47.368 [2024-11-26 16:27:12.786629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.786675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.797891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fc128 00:21:47.368 [2024-11-26 16:27:12.800013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.800059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.811462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fb8b8 00:21:47.368 [2024-11-26 16:27:12.813624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.813669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.825089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fb048 00:21:47.368 [2024-11-26 16:27:12.827324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.827376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.838959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fa7d8 00:21:47.368 [2024-11-26 16:27:12.841175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.841218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.852369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f9f68 00:21:47.368 [2024-11-26 16:27:12.854461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.854506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.865780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f96f8 00:21:47.368 [2024-11-26 16:27:12.867828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.867888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.879207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f8e88 00:21:47.368 [2024-11-26 16:27:12.881429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.881459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.894471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f8618 00:21:47.368 [2024-11-26 16:27:12.896666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.896745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.910883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f7da8 00:21:47.368 [2024-11-26 16:27:12.913253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.913297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.926063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f7538 00:21:47.368 [2024-11-26 16:27:12.928266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.928312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.939759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f6cc8 00:21:47.368 [2024-11-26 16:27:12.941832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.941877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.953528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f6458 00:21:47.368 [2024-11-26 16:27:12.955475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.955520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.966962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f5be8 00:21:47.368 [2024-11-26 16:27:12.968945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.968991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.980489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f5378 00:21:47.368 [2024-11-26 16:27:12.982487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.982532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:12.993984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f4b08 00:21:47.368 [2024-11-26 16:27:12.995940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:12.995984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:47.368 [2024-11-26 16:27:13.007445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f4298 00:21:47.368 [2024-11-26 16:27:13.009435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.368 [2024-11-26 16:27:13.009481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:47.627 [2024-11-26 16:27:13.022629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f3a28 00:21:47.627 [2024-11-26 16:27:13.024478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.627 [2024-11-26 16:27:13.024527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.036039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f31b8 00:21:47.628 [2024-11-26 16:27:13.038109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.038156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.049708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f2948 00:21:47.628 [2024-11-26 16:27:13.051592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.051636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.063235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f20d8 00:21:47.628 [2024-11-26 16:27:13.065209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.065254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.076782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f1868 00:21:47.628 [2024-11-26 16:27:13.078619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.078663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.090731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f0ff8 00:21:47.628 [2024-11-26 16:27:13.092544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.092589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.104125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f0788 00:21:47.628 [2024-11-26 16:27:13.105971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.106016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.117871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166eff18 00:21:47.628 [2024-11-26 16:27:13.119656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.119700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.131305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ef6a8 00:21:47.628 [2024-11-26 16:27:13.133180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.133225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.144820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166eee38 00:21:47.628 [2024-11-26 16:27:13.146614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.146659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.158307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ee5c8 00:21:47.628 [2024-11-26 16:27:13.160012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.160058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.171647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166edd58 00:21:47.628 [2024-11-26 16:27:13.173457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 
16:27:13.173501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.185208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ed4e8 00:21:47.628 [2024-11-26 16:27:13.186917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.186960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.198676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ecc78 00:21:47.628 [2024-11-26 16:27:13.200363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.200416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.212123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ec408 00:21:47.628 [2024-11-26 16:27:13.213910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.213957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.225748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ebb98 00:21:47.628 [2024-11-26 16:27:13.227384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.227435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.239098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166eb328 00:21:47.628 [2024-11-26 16:27:13.240869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.240918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.252946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166eaab8 00:21:47.628 [2024-11-26 16:27:13.254634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.628 [2024-11-26 16:27:13.254680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:47.628 [2024-11-26 16:27:13.266590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ea248 00:21:47.628 [2024-11-26 16:27:13.268184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:47.628 [2024-11-26 16:27:13.268229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:47.887 [2024-11-26 16:27:13.281481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e99d8 00:21:47.887 [2024-11-26 16:27:13.283018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.887 [2024-11-26 16:27:13.283067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:47.887 [2024-11-26 16:27:13.295083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e9168 00:21:47.887 [2024-11-26 16:27:13.296664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.887 [2024-11-26 16:27:13.296709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:47.887 [2024-11-26 16:27:13.308699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e88f8 00:21:47.887 [2024-11-26 16:27:13.310440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.887 [2024-11-26 16:27:13.310492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:47.887 [2024-11-26 16:27:13.322774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e8088 00:21:47.887 [2024-11-26 16:27:13.324267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.887 [2024-11-26 16:27:13.324312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:47.887 [2024-11-26 16:27:13.336318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e7818 00:21:47.887 [2024-11-26 16:27:13.337919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.887 [2024-11-26 16:27:13.337964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:47.887 [2024-11-26 16:27:13.349915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e6fa8 00:21:47.887 [2024-11-26 16:27:13.351383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.887 [2024-11-26 16:27:13.351454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:47.887 [2024-11-26 16:27:13.363296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e6738 00:21:47.887 [2024-11-26 16:27:13.364818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5846 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:47.887 [2024-11-26 16:27:13.364865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:47.887 [2024-11-26 16:27:13.376609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e5ec8 00:21:47.888 [2024-11-26 16:27:13.378108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.378152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.390076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e5658 00:21:47.888 [2024-11-26 16:27:13.391513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.391557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.403416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e4de8 00:21:47.888 [2024-11-26 16:27:13.404907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.404955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.417048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e4578 00:21:47.888 [2024-11-26 16:27:13.418471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.418515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.430394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e3d08 00:21:47.888 [2024-11-26 16:27:13.431770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.431815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.443783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e3498 00:21:47.888 [2024-11-26 16:27:13.445242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.445286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.457484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e2c28 00:21:47.888 [2024-11-26 16:27:13.458850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:10452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.458894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.470927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e23b8 00:21:47.888 [2024-11-26 16:27:13.472264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.472294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.484659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e1b48 00:21:47.888 [2024-11-26 16:27:13.486134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.486177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.498164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e12d8 00:21:47.888 [2024-11-26 16:27:13.499487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.499532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.511778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e0a68 00:21:47.888 [2024-11-26 16:27:13.513154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.513198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:47.888 [2024-11-26 16:27:13.525298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e01f8 00:21:47.888 [2024-11-26 16:27:13.526593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:47.888 [2024-11-26 16:27:13.526637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.539954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166df988 00:21:48.147 [2024-11-26 16:27:13.541316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.541406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.553674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166df118 00:21:48.147 [2024-11-26 16:27:13.554976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:10 nsid:1 lba:23539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.555023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.567376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166de8a8 00:21:48.147 [2024-11-26 16:27:13.568589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.568635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.580934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166de038 00:21:48.147 [2024-11-26 16:27:13.582315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.582402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:48.147 18344.00 IOPS, 71.66 MiB/s [2024-11-26T16:27:13.800Z] [2024-11-26 16:27:13.603394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166de038 00:21:48.147 [2024-11-26 16:27:13.606075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.606122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.619118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166de8a8 00:21:48.147 [2024-11-26 16:27:13.621634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.621681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.633940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166df118 00:21:48.147 [2024-11-26 16:27:13.636271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.636318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.648394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166df988 00:21:48.147 [2024-11-26 16:27:13.650694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.650741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.662771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e01f8 00:21:48.147 [2024-11-26 
16:27:13.665048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.665109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.677547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e0a68 00:21:48.147 [2024-11-26 16:27:13.679751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.679797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.691982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e12d8 00:21:48.147 [2024-11-26 16:27:13.694260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.694307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.706419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e1b48 00:21:48.147 [2024-11-26 16:27:13.708590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.708635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.720911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e23b8 00:21:48.147 [2024-11-26 16:27:13.723103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.723149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.735337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e2c28 00:21:48.147 [2024-11-26 16:27:13.737639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.147 [2024-11-26 16:27:13.737686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:48.147 [2024-11-26 16:27:13.749732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e3498 00:21:48.148 [2024-11-26 16:27:13.751897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.148 [2024-11-26 16:27:13.751941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:48.148 [2024-11-26 16:27:13.764314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with 
pdu=0x2000166e3d08 00:21:48.148 [2024-11-26 16:27:13.766505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.148 [2024-11-26 16:27:13.766551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:48.148 [2024-11-26 16:27:13.778246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e4578 00:21:48.148 [2024-11-26 16:27:13.780270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.148 [2024-11-26 16:27:13.780315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:48.148 [2024-11-26 16:27:13.792044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e4de8 00:21:48.148 [2024-11-26 16:27:13.794287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.148 [2024-11-26 16:27:13.794336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:48.407 [2024-11-26 16:27:13.806765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e5658 00:21:48.407 [2024-11-26 16:27:13.808961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.407 [2024-11-26 16:27:13.809014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:48.407 [2024-11-26 16:27:13.820602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e5ec8 00:21:48.407 [2024-11-26 16:27:13.822597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.407 [2024-11-26 16:27:13.822643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:48.407 [2024-11-26 16:27:13.834149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e6738 00:21:48.407 [2024-11-26 16:27:13.836140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.407 [2024-11-26 16:27:13.836169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:48.407 [2024-11-26 16:27:13.847994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e6fa8 00:21:48.407 [2024-11-26 16:27:13.850072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.407 [2024-11-26 16:27:13.850118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:48.407 [2024-11-26 16:27:13.861924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a362a0) with pdu=0x2000166e7818 00:21:48.407 [2024-11-26 16:27:13.863907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.407 [2024-11-26 16:27:13.863953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:48.407 [2024-11-26 16:27:13.875613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e8088 00:21:48.407 [2024-11-26 16:27:13.877622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.407 [2024-11-26 16:27:13.877667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:48.407 [2024-11-26 16:27:13.889238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e88f8 00:21:48.407 [2024-11-26 16:27:13.891188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.407 [2024-11-26 16:27:13.891233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:48.407 [2024-11-26 16:27:13.902874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e9168 00:21:48.407 [2024-11-26 16:27:13.904810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.407 [2024-11-26 16:27:13.904842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:13.918483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166e99d8 00:21:48.408 [2024-11-26 16:27:13.920547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:13.920594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:13.934731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ea248 00:21:48.408 [2024-11-26 16:27:13.936789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:13.936822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:13.950086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166eaab8 00:21:48.408 [2024-11-26 16:27:13.952132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:13.952176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:13.965072] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166eb328 00:21:48.408 [2024-11-26 16:27:13.967092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:13.967137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:13.978767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ebb98 00:21:48.408 [2024-11-26 16:27:13.980582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:13.980628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:13.992121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ec408 00:21:48.408 [2024-11-26 16:27:13.994131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:13.994161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:14.005800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ecc78 00:21:48.408 [2024-11-26 16:27:14.007595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:14.007640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:14.019436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ed4e8 00:21:48.408 [2024-11-26 16:27:14.021261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:14.021306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:14.032961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166edd58 00:21:48.408 [2024-11-26 16:27:14.034794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:14.034838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:48.408 [2024-11-26 16:27:14.046615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ee5c8 00:21:48.408 [2024-11-26 16:27:14.048309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.408 [2024-11-26 16:27:14.048375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.061479] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166eee38 00:21:48.668 [2024-11-26 16:27:14.063194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.063242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.075266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166ef6a8 00:21:48.668 [2024-11-26 16:27:14.077018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.077056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.088972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166eff18 00:21:48.668 [2024-11-26 16:27:14.090683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.090730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.102445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f0788 00:21:48.668 [2024-11-26 16:27:14.104091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.104137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.116393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f0ff8 00:21:48.668 [2024-11-26 16:27:14.118077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.118121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.129941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f1868 00:21:48.668 [2024-11-26 16:27:14.131604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.131649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.143567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f20d8 00:21:48.668 [2024-11-26 16:27:14.145278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.145325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 
16:27:14.157104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f2948 00:21:48.668 [2024-11-26 16:27:14.158788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.158832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.170925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f31b8 00:21:48.668 [2024-11-26 16:27:14.172501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.172532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.184427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f3a28 00:21:48.668 [2024-11-26 16:27:14.186031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.186077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.198038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f4298 00:21:48.668 [2024-11-26 16:27:14.199641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.199671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.211835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f4b08 00:21:48.668 [2024-11-26 16:27:14.213427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.213497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.225711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f5378 00:21:48.668 [2024-11-26 16:27:14.227298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.227366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.239334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f5be8 00:21:48.668 [2024-11-26 16:27:14.240882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.240929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:21:48.668 [2024-11-26 16:27:14.253007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f6458 00:21:48.668 [2024-11-26 16:27:14.254557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.254601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.266598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f6cc8 00:21:48.668 [2024-11-26 16:27:14.268082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.268127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.280459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f7538 00:21:48.668 [2024-11-26 16:27:14.282005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.282050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.294334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f7da8 00:21:48.668 [2024-11-26 16:27:14.295846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.668 [2024-11-26 16:27:14.295891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:48.668 [2024-11-26 16:27:14.307981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f8618 00:21:48.668 [2024-11-26 16:27:14.309495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.669 [2024-11-26 16:27:14.309541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.323031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f8e88 00:21:48.928 [2024-11-26 16:27:14.324495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.324545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.337014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f96f8 00:21:48.928 [2024-11-26 16:27:14.338483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.338530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.350529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166f9f68 00:21:48.928 [2024-11-26 16:27:14.351925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.351970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.364007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fa7d8 00:21:48.928 [2024-11-26 16:27:14.365440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.365502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.377567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fb048 00:21:48.928 [2024-11-26 16:27:14.378891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.378936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.390954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fb8b8 00:21:48.928 [2024-11-26 16:27:14.392263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.392310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.404376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fc128 00:21:48.928 [2024-11-26 16:27:14.405740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.405785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.417825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fc998 00:21:48.928 [2024-11-26 16:27:14.419101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.419146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.431460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fd208 00:21:48.928 [2024-11-26 16:27:14.432777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.432825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.444930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fda78 00:21:48.928 [2024-11-26 16:27:14.446214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.446259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.458411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fe2e8 00:21:48.928 [2024-11-26 16:27:14.459643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.459688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.471718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166feb58 00:21:48.928 [2024-11-26 16:27:14.473049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.928 [2024-11-26 16:27:14.473096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:48.928 [2024-11-26 16:27:14.490823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fef90 00:21:48.928 [2024-11-26 16:27:14.493141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.929 [2024-11-26 16:27:14.493186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:48.929 [2024-11-26 16:27:14.504548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166feb58 00:21:48.929 [2024-11-26 16:27:14.506761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.929 [2024-11-26 16:27:14.506807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:48.929 [2024-11-26 16:27:14.518097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fe2e8 00:21:48.929 [2024-11-26 16:27:14.520316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.929 [2024-11-26 16:27:14.520381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:48.929 [2024-11-26 16:27:14.531675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fda78 00:21:48.929 [2024-11-26 16:27:14.533890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:48.929 [2024-11-26 16:27:14.533935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:21:48.929 [2024-11-26 16:27:14.545135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fd208
00:21:48.929 [2024-11-26 16:27:14.547360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:48.929 [2024-11-26 16:27:14.547396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:21:48.929 [2024-11-26 16:27:14.558553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fc998
00:21:48.929 [2024-11-26 16:27:14.560678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:48.929 [2024-11-26 16:27:14.560745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:21:48.929 [2024-11-26 16:27:14.572248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fc128
00:21:48.929 [2024-11-26 16:27:14.574620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:48.929 [2024-11-26 16:27:14.574667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:21:49.188 [2024-11-26 16:27:14.586905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a362a0) with pdu=0x2000166fb8b8
00:21:49.188 18217.50 IOPS, 71.16 MiB/s [2024-11-26T16:27:14.841Z] [2024-11-26 16:27:14.589203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:49.188 [2024-11-26 16:27:14.589230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:21:49.188
00:21:49.188 Latency(us)
00:21:49.188 [2024-11-26T16:27:14.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:49.188 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:49.188 nvme0n1 : 2.00 18250.77 71.29 0.00 0.00 7007.08 3932.16 25737.77
00:21:49.188 [2024-11-26T16:27:14.841Z] ===================================================================================================================
00:21:49.188 [2024-11-26T16:27:14.841Z] Total : 18250.77 71.29 0.00 0.00 7007.08 3932.16 25737.77
00:21:49.188 {
00:21:49.188   "results": [
00:21:49.188     {
00:21:49.188       "job": "nvme0n1",
00:21:49.188       "core_mask": "0x2",
00:21:49.188       "workload": "randwrite",
00:21:49.188       "status": "finished",
00:21:49.188       "queue_depth": 128,
00:21:49.188       "io_size": 4096,
00:21:49.188       "runtime": 2.003368,
00:21:49.188       "iops": 18250.765710543445,
00:21:49.188       "mibps": 71.29205355681033,
00:21:49.188       "io_failed": 0,
00:21:49.188       "io_timeout": 0,
00:21:49.188       "avg_latency_us": 7007.078476253937,
00:21:49.188       "min_latency_us": 3932.16,
00:21:49.188       "max_latency_us": 25737.774545454544
00:21:49.188     }
00:21:49.188   ],
00:21:49.188   "core_count": 1
00:21:49.188 }
00:21:49.188 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:49.188 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:49.188 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:49.188 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:49.188 | .driver_specific
00:21:49.188 | .nvme_error
00:21:49.188 | .status_code
00:21:49.188 | .command_transient_transport_error'
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94924
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94924 ']'
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94924
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94924
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:49.448 killing process with pid 94924
16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94924'
00:21:49.448 Received shutdown signal, test time was about 2.000000 seconds
00:21:49.448
00:21:49.448 Latency(us)
00:21:49.448 [2024-11-26T16:27:15.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:49.448 [2024-11-26T16:27:15.101Z] ===================================================================================================================
00:21:49.448 [2024-11-26T16:27:15.101Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94924
00:21:49.448 16:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94924
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94977
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94977 /var/tmp/bperf.sock
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94977 ']'
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:49.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:49.448 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:49.448 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:49.448 Zero copy mechanism will not be used.
00:21:49.448 [2024-11-26 16:27:15.079588] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization...
00:21:49.448 [2024-11-26 16:27:15.079690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94977 ]
00:21:49.707 [2024-11-26 16:27:15.215687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:49.707 [2024-11-26 16:27:15.234426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:49.707 [2024-11-26 16:27:15.261660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:49.707 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:49.707 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:21:49.707 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:49.707 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:49.966 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:49.966 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.966 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:49.966 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.966 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:49.966 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
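Taken together, the commands traced above and just below amount to the following sequence. This is a condensed sketch reassembled from the commands visible in this log, not a verbatim excerpt of host/digest.sh; the accel_error_inject_error call is issued to the nvmf target application through rpc_cmd, whose RPC socket does not appear in this excerpt and is assumed here to be the default one.

  # 1. Start bdevperf with its own RPC socket: 128 KiB random writes, queue depth 16, 2 s run.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # 2. Enable per-status-code NVMe error counters on the initiator side.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 3. On the target application: CRC32C error injection for the accel layer starts out
  #    disabled and is switched to corrupt mode later in the trace
  #    (accel_error_inject_error -o crc32c -t corrupt -i 32), which is what makes the
  #    data digest checks fail during the run.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # 4. Attach the remote namespace with data digest enabled (--ddgst), creating bdev nvme0n1.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 5. Kick off the timed run over the bdevperf RPC socket.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # 6. Afterwards, read how many commands completed with TRANSIENT TRANSPORT ERROR (00/22);
  #    the test only asserts that this count is greater than zero.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'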
00:21:50.533 nvme0n1 00:21:50.533 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:50.533 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.533 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:50.533 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.533 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:50.533 16:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:50.533 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:50.533 Zero copy mechanism will not be used. 00:21:50.534 Running I/O for 2 seconds... 00:21:50.534 [2024-11-26 16:27:16.092134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.092229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.092257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.097416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.097523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.097546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.102225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.102333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.102354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.107223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.107304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.107325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.111877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.111996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.112017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.116646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.116776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.116799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.121369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.121500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.121520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.126267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.126358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.126410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.131015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.131133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.131153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.135721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.135849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.135870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.140793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.140882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.140904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.145551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.145651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.145671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.150264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.150392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.150412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.155245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.155347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.155369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.160060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.160157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.160177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.164910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.165001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.165036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.169675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.169792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.169812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.174259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.174412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.174431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.534 [2024-11-26 16:27:16.179253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.534 [2024-11-26 16:27:16.179407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.534 [2024-11-26 16:27:16.179430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.794 [2024-11-26 16:27:16.184513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.184622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.184645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.188959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.189333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.189386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.193896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.194211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.194242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.198608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.198926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.198958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.203184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.203510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.203540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.207781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.208096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.208128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.212477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.212815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.212846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.217206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.217531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.217561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.221828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.222142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.222173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.226394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.226707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.226737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.231002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.231316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.231357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.235500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.235818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.235848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.240090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.240421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.240444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.244672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.245053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 
16:27:16.245083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.249451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.249776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.249806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.254228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.254592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.254624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.258944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.259261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.259292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.263499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.263817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.263847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.268041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.268375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.268420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.272593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.272948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.272988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.277270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.277607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:50.795 [2024-11-26 16:27:16.277637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.282005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.282323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.282362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.286623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.286942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.286972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.291126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.291452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.291492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.295723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.296029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.296051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.300193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.300523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.300569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.795 [2024-11-26 16:27:16.304896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.795 [2024-11-26 16:27:16.305246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.795 [2024-11-26 16:27:16.305275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.309466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.309781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.309812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.314166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.314490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.314527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.318816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.319135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.319165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.323430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.323745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.323774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.327964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.328302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.328332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.332527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.332875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.332906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.337208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.337538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.337571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.341909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.342223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.342253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.346465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.346777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.346807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.350984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.351300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.351329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.355539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.355853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.355882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.360107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.360454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.360491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.364639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.365035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.365066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.369520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.369826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.369856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.374136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.374465] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.374495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.378786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.379091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.379121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.383412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.383727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.383757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.387943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.388256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.388286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.392448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.392796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.392826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.397082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.397406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.397472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.401835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.402143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.796 [2024-11-26 16:27:16.402172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:50.796 [2024-11-26 16:27:16.406490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:50.796 [2024-11-26 16:27:16.406825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:50.796 [2024-11-26 16:27:16.406855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:21:50.796 [2024-11-26 16:27:16.411176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8
00:21:50.796 [2024-11-26 16:27:16.411501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:50.796 [2024-11-26 16:27:16.411528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[repetitive injection-loop output condensed: the same cycle — a data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8", the retried WRITE command (len:32, varying lba), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats continuously from 16:27:16.41 through 16:27:17.08 (console timestamps 00:21:50.796–00:21:51.585)]
00:21:51.585 [2024-11-26 16:27:17.082958]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.083048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.083068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.087630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.087722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.087742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.585 6652.00 IOPS, 831.50 MiB/s [2024-11-26T16:27:17.238Z] [2024-11-26 16:27:17.093578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.093672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.093694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.098147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.098236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.098256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.102833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.102923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.102942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.107322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.107425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.107444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.111729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.111828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.111847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.116216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.116309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.116329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.120644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.120774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.120795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.125171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.125262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.125282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.129761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.129852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.129872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.134232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.134321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.134341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.138750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.138850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.138870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.143274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.143364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.143396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.147784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.147875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.147894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.152221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.152310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.152330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.156860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.156944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.156965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.585 [2024-11-26 16:27:17.161963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.585 [2024-11-26 16:27:17.162066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.585 [2024-11-26 16:27:17.162086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.166933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.167041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.167062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.171976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.172058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.172079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.177087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.177208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.177230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.182455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.182568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.182589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.187565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.187674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.187710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.192642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.192779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.192803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.197644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.197752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.197773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.202409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.202510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.202530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.207265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.207357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.207390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.211773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.211876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 
16:27:17.211896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.216334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.216467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.216487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.221145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.221250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.221269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.586 [2024-11-26 16:27:17.226078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.586 [2024-11-26 16:27:17.226172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.586 [2024-11-26 16:27:17.226194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.231500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.231584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.231622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.236952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.237026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.237052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.242288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.242373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.242397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.248566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.248652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:51.851 [2024-11-26 16:27:17.248675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.253346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.253440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.253490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.258052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.258146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.258166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.262992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.263094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.263114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.267798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.267910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.267931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.272553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.272649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.272671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.277259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.277353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.277373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.282063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.282155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.282175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.286744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.286835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.286856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.291399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.291500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.291520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.296059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.296154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.296175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.301105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.301215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.301236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.305815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.305907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.305927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.310453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.310557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.310577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.315415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.315523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.315542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.320222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.320315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.320336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.324928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.325028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.325050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.329897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.329964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.329986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.334543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.334638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.334659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.851 [2024-11-26 16:27:17.339127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.851 [2024-11-26 16:27:17.339227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.851 [2024-11-26 16:27:17.339247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.343837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.343916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.343936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.348772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.348858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.348880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.353536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.353613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.353633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.358263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.358340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.358388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.363214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.363317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.363338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.368075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.368171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.368192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.373713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.373786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.373847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.379204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.379306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.379329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.384600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.384756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.384780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.390174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.390268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.390290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.395187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.395281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.395302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.399844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.399936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.399956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.404425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.404527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.404547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.409297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.409417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.409439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.414133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.414232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.414252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.418727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 
16:27:17.418823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.418843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.423203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.423293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.423318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.427729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.427821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.427841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.432465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.432567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.432588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.437160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.437252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.437273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.442002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.442078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.442098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.446822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.446911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.446931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.451546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with 
pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.451650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.451670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.456012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.456107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.456127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.460508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.460608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.460628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.464970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.465084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.465104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.469585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.852 [2024-11-26 16:27:17.469688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.852 [2024-11-26 16:27:17.469708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.852 [2024-11-26 16:27:17.474255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.853 [2024-11-26 16:27:17.474348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.853 [2024-11-26 16:27:17.474368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:51.853 [2024-11-26 16:27:17.478677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.853 [2024-11-26 16:27:17.478773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.853 [2024-11-26 16:27:17.478793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:51.853 [2024-11-26 16:27:17.483176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.853 [2024-11-26 16:27:17.483273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.853 [2024-11-26 16:27:17.483292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:51.853 [2024-11-26 16:27:17.487774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.853 [2024-11-26 16:27:17.487870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.853 [2024-11-26 16:27:17.487889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.853 [2024-11-26 16:27:17.492484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:51.853 [2024-11-26 16:27:17.492580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.853 [2024-11-26 16:27:17.492603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.497489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.497623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.497646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.502935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.503050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.503097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.508347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.508475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.508498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.513564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.513656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.513691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.518395] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.518498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.518518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.522994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.523085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.523105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.527471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.527572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.527593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.532063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.532158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.532178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.536535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.536627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.536647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.540994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.541091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.541126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.545672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8 00:21:52.128 [2024-11-26 16:27:17.545790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.128 [2024-11-26 16:27:17.545810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:52.128 [2024-11-26 16:27:17.550208] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8
00:21:52.128 [2024-11-26 16:27:17.550301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.128 [2024-11-26 16:27:17.550322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:21:52.128 [2024-11-26 16:27:17.554741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a365e0) with pdu=0x2000166ff3c8
00:21:52.128 [2024-11-26 16:27:17.554831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.128 [2024-11-26 16:27:17.554851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for every remaining queued WRITE on qid:1 (lba varies per command, sqhd cycling 0002/0022/0042/0062) from 16:27:17.559 through 16:27:18.089 ...]
00:21:52.654 6606.50 IOPS, 825.81 MiB/s
00:21:52.654 Latency(us)
00:21:52.654 [2024-11-26T16:27:18.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:52.654 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:21:52.654 nvme0n1 : 2.00 6604.93 825.62 0.00 0.00 2417.14 1772.45 5898.24
00:21:52.654 [2024-11-26T16:27:18.307Z] ===================================================================================================================
00:21:52.654 [2024-11-26T16:27:18.307Z] Total : 6604.93 825.62 0.00 0.00 2417.14 1772.45 5898.24
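As a quick cross-check of the summary above (not part of the test output), the reported MiB/s follows directly from the measured IOPS and the 131072-byte IO size; a minimal sketch in shell, using only the numbers printed in the job line and in the JSON results below:

    # Sanity check: throughput implied by the bdevperf job line above.
    # 6604.93 write IOPS at 131072 bytes (128 KiB) per IO -> MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 6604.93 * 131072 / (1024 * 1024) }'
    # prints 825.62 MiB/s, matching the "mibps" field in the JSON results that follow.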
00:21:52.654 {
00:21:52.654   "results": [
00:21:52.654     {
00:21:52.654       "job": "nvme0n1",
00:21:52.654       "core_mask": "0x2",
00:21:52.654       "workload": "randwrite",
00:21:52.654       "status": "finished",
00:21:52.654       "queue_depth": 16,
00:21:52.654       "io_size": 131072,
00:21:52.654       "runtime": 2.002594,
00:21:52.654       "iops": 6604.933401378412,
00:21:52.654       "mibps": 825.6166751723015,
00:21:52.654       "io_failed": 0,
00:21:52.654       "io_timeout": 0,
00:21:52.654       "avg_latency_us": 2417.1416647765936,
00:21:52.654       "min_latency_us": 1772.4509090909091,
00:21:52.654       "max_latency_us": 5898.24
00:21:52.654     }
00:21:52.654   ],
00:21:52.654   "core_count": 1
00:21:52.654 }
00:21:52.654 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:52.654 | .driver_specific
00:21:52.654 | .nvme_error
00:21:52.654 | .status_code
00:21:52.654 | .command_transient_transport_error'
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 427 > 0 ))
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94977
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94977 ']'
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94977
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94977
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:52.913 killing process with pid 94977
16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94977'
Received shutdown signal, test time was about 2.000000 seconds
00:21:52.913
00:21:52.913 Latency(us)
00:21:52.913 [2024-11-26T16:27:18.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:52.913 [2024-11-26T16:27:18.566Z] ===================================================================================================================
00:21:52.913 [2024-11-26T16:27:18.566Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94977
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94977
00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94805
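The trace above shows how the harness decides pass/fail here: it reads the bdev's iostat over the bperf RPC socket and extracts the transient transport error counter (427 in this run). A minimal sketch of that same query, assuming the bdevperf app is still running and listening on /var/tmp/bperf.sock:

    #!/usr/bin/env bash
    # Read nvme0n1's iostat via the SPDK RPC client and pull out the
    # command_transient_transport_error counter, as digest.sh does above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # The digest-error test passes when at least one transient transport error was counted.
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"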
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94805 ']' 00:21:52.913 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94805 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94805 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.172 killing process with pid 94805 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94805' 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94805 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94805 00:21:53.172 00:21:53.172 real 0m14.476s 00:21:53.172 user 0m28.269s 00:21:53.172 sys 0m4.304s 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:53.172 ************************************ 00:21:53.172 END TEST nvmf_digest_error 00:21:53.172 ************************************ 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.172 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.172 rmmod nvme_tcp 00:21:53.172 rmmod nvme_fabrics 00:21:53.431 rmmod nvme_keyring 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 94805 ']' 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 94805 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 94805 ']' 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 94805 00:21:53.431 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94805) - No such process 00:21:53.431 Process with pid 94805 is not found 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 94805 is not found' 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:53.431 16:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:53.431 16:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:53.431 16:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:53.432 16:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:53.432 16:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.432 16:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.432 16:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:53.691 00:21:53.691 real 0m30.849s 00:21:53.691 user 0m58.617s 00:21:53.691 sys 0m9.021s 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.691 ************************************ 00:21:53.691 END TEST nvmf_digest 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:53.691 ************************************ 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:53.691 
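The get_transient_errcount check traced above reduces to a single RPC call plus a jq filter: bdevperf records every NVMe completion that finished with COMMAND TRANSIENT TRANSPORT ERROR (00/22) — which is how the injected data-digest failures complete — in the bdev's driver-specific iostat counters, and the test only asserts that this counter ended up non-zero. A minimal stand-alone sketch of that check, reusing the bperf socket path and jq filter shown in the trace (a condensed recap, not the verbatim digest.sh source):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# read the transient-transport-error counter that bdevperf keeps for nvme0n1
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# every corrupted data digest should have completed as 00/22, so expect a positive count
(( errcount > 0 ))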
16:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.691 ************************************ 00:21:53.691 START TEST nvmf_host_multipath 00:21:53.691 ************************************ 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:53.691 * Looking for test storage... 00:21:53.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.691 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:53.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.952 --rc genhtml_branch_coverage=1 00:21:53.952 --rc genhtml_function_coverage=1 00:21:53.952 --rc genhtml_legend=1 00:21:53.952 --rc geninfo_all_blocks=1 00:21:53.952 --rc geninfo_unexecuted_blocks=1 00:21:53.952 00:21:53.952 ' 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:53.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.952 --rc genhtml_branch_coverage=1 00:21:53.952 --rc genhtml_function_coverage=1 00:21:53.952 --rc genhtml_legend=1 00:21:53.952 --rc geninfo_all_blocks=1 00:21:53.952 --rc geninfo_unexecuted_blocks=1 00:21:53.952 00:21:53.952 ' 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:53.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.952 --rc genhtml_branch_coverage=1 00:21:53.952 --rc genhtml_function_coverage=1 00:21:53.952 --rc genhtml_legend=1 00:21:53.952 --rc geninfo_all_blocks=1 00:21:53.952 --rc geninfo_unexecuted_blocks=1 00:21:53.952 00:21:53.952 ' 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:53.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.952 --rc genhtml_branch_coverage=1 00:21:53.952 --rc genhtml_function_coverage=1 00:21:53.952 --rc genhtml_legend=1 00:21:53.952 --rc geninfo_all_blocks=1 00:21:53.952 --rc geninfo_unexecuted_blocks=1 00:21:53.952 00:21:53.952 ' 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.952 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.953 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:53.953 Cannot find device "nvmf_init_br" 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:53.953 Cannot find device "nvmf_init_br2" 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:53.953 Cannot find device "nvmf_tgt_br" 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:53.953 Cannot find device "nvmf_tgt_br2" 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:53.953 Cannot find device "nvmf_init_br" 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:53.953 Cannot find device "nvmf_init_br2" 00:21:53.953 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:53.954 Cannot find device "nvmf_tgt_br" 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:53.954 Cannot find device "nvmf_tgt_br2" 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:53.954 Cannot find device "nvmf_br" 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:53.954 Cannot find device "nvmf_init_if" 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:53.954 Cannot find device "nvmf_init_if2" 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:53.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:53.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:53.954 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
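The nvmf_veth_init sequence above is easier to read as a topology sketch: two veth pairs per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, the bridge ends enslaved to nvmf_br, and the 10.0.0.0/24 addresses split between initiator (.1/.2) and target (.3/.4). A condensed recap using the first pair of each (the *_if2/*_br2 interfaces are configured the same way with 10.0.0.2 and 10.0.0.4):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side is moved into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                          # both bridge ends join nvmf_br
ip link set nvmf_tgt_br master nvmf_br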
00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:54.214 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:54.214 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:21:54.214 00:21:54.214 --- 10.0.0.3 ping statistics --- 00:21:54.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.214 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:54.214 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:54.214 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:21:54.214 00:21:54.214 --- 10.0.0.4 ping statistics --- 00:21:54.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.214 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:54.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:54.214 00:21:54.214 --- 10.0.0.1 ping statistics --- 00:21:54.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.214 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:54.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:54.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:21:54.214 00:21:54.214 --- 10.0.0.2 ping statistics --- 00:21:54.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.214 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=95279 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 95279 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95279 ']' 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.214 16:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:54.214 [2024-11-26 16:27:19.852337] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:21:54.214 [2024-11-26 16:27:19.852458] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.474 [2024-11-26 16:27:20.005476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:54.474 [2024-11-26 16:27:20.029623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.474 [2024-11-26 16:27:20.029691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.474 [2024-11-26 16:27:20.029705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.474 [2024-11-26 16:27:20.029714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.474 [2024-11-26 16:27:20.029723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.474 [2024-11-26 16:27:20.030649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.474 [2024-11-26 16:27:20.030662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.474 [2024-11-26 16:27:20.066815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:54.474 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.474 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:54.474 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:54.474 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:54.474 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:54.733 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.733 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95279 00:21:54.733 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:54.992 [2024-11-26 16:27:20.427072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.992 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:55.250 Malloc0 00:21:55.250 16:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:55.509 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:55.776 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:56.036 [2024-11-26 16:27:21.490641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:56.036 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:56.294 [2024-11-26 16:27:21.702734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95323 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95323 /var/tmp/bdevperf.sock 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95323 ']' 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.294 16:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:57.230 16:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.230 16:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:57.230 16:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:57.489 16:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:57.748 Nvme0n1 00:21:57.748 16:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:58.006 Nvme0n1 00:21:58.006 16:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:58.006 16:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:59.382 16:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:59.383 16:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:59.383 16:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:59.641 16:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:59.641 16:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95368 00:21:59.641 16:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95279 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:59.641 16:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:06.202 Attaching 4 probes... 00:22:06.202 @path[10.0.0.3, 4421]: 20480 00:22:06.202 @path[10.0.0.3, 4421]: 20808 00:22:06.202 @path[10.0.0.3, 4421]: 20820 00:22:06.202 @path[10.0.0.3, 4421]: 20883 00:22:06.202 @path[10.0.0.3, 4421]: 20559 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95368 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:06.202 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:06.461 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:06.461 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95487 00:22:06.461 16:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95279 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:06.461 16:27:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:13.025 16:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:13.025 16:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:13.025 Attaching 4 probes... 00:22:13.025 @path[10.0.0.3, 4420]: 20259 00:22:13.025 @path[10.0.0.3, 4420]: 20590 00:22:13.025 @path[10.0.0.3, 4420]: 20776 00:22:13.025 @path[10.0.0.3, 4420]: 20885 00:22:13.025 @path[10.0.0.3, 4420]: 20912 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95487 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:13.025 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:13.283 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:13.283 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95601 00:22:13.283 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95279 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:13.283 16:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:19.848 16:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:19.848 16:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:19.848 16:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:19.848 16:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:19.848 Attaching 4 probes... 00:22:19.848 @path[10.0.0.3, 4421]: 15696 00:22:19.848 @path[10.0.0.3, 4421]: 20229 00:22:19.848 @path[10.0.0.3, 4421]: 20067 00:22:19.848 @path[10.0.0.3, 4421]: 20160 00:22:19.848 @path[10.0.0.3, 4421]: 20112 00:22:19.848 16:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:19.848 16:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:19.848 16:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:19.848 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:19.848 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:19.848 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:19.848 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95601 00:22:19.848 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:19.848 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:19.848 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:19.848 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:20.106 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:20.106 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95713 00:22:20.106 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95279 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:20.106 16:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:26.666 Attaching 4 probes... 
00:22:26.666 00:22:26.666 00:22:26.666 00:22:26.666 00:22:26.666 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95713 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:26.666 16:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:26.666 16:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:26.666 16:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:26.666 16:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95832 00:22:26.666 16:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95279 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:26.666 16:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:33.232 Attaching 4 probes... 
00:22:33.232 @path[10.0.0.3, 4421]: 19791 00:22:33.232 @path[10.0.0.3, 4421]: 20041 00:22:33.232 @path[10.0.0.3, 4421]: 20335 00:22:33.232 @path[10.0.0.3, 4421]: 20080 00:22:33.232 @path[10.0.0.3, 4421]: 19988 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95832 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:33.232 16:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:34.608 16:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:34.608 16:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95953 00:22:34.608 16:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95279 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:34.608 16:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:41.172 16:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:41.172 16:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:41.172 Attaching 4 probes... 
00:22:41.172 @path[10.0.0.3, 4420]: 19404 00:22:41.172 @path[10.0.0.3, 4420]: 20009 00:22:41.172 @path[10.0.0.3, 4420]: 19972 00:22:41.172 @path[10.0.0.3, 4420]: 19968 00:22:41.172 @path[10.0.0.3, 4420]: 20157 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95953 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:41.172 [2024-11-26 16:28:06.429421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:41.172 16:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:47.794 16:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:47.794 16:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96132 00:22:47.794 16:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95279 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:47.794 16:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:54.371 16:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:54.371 16:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:54.371 Attaching 4 probes... 
00:22:54.371 @path[10.0.0.3, 4421]: 19530 00:22:54.371 @path[10.0.0.3, 4421]: 19850 00:22:54.371 @path[10.0.0.3, 4421]: 20132 00:22:54.371 @path[10.0.0.3, 4421]: 20023 00:22:54.371 @path[10.0.0.3, 4421]: 19924 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96132 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95323 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95323 ']' 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95323 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95323 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:54.371 killing process with pid 95323 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95323' 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95323 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95323 00:22:54.371 { 00:22:54.371 "results": [ 00:22:54.371 { 00:22:54.371 "job": "Nvme0n1", 00:22:54.371 "core_mask": "0x4", 00:22:54.371 "workload": "verify", 00:22:54.371 "status": "terminated", 00:22:54.371 "verify_range": { 00:22:54.371 "start": 0, 00:22:54.371 "length": 16384 00:22:54.371 }, 00:22:54.371 "queue_depth": 128, 00:22:54.371 "io_size": 4096, 00:22:54.371 "runtime": 55.351757, 00:22:54.371 "iops": 8566.485071106234, 00:22:54.371 "mibps": 33.462832309008725, 00:22:54.371 "io_failed": 0, 00:22:54.371 "io_timeout": 0, 00:22:54.371 "avg_latency_us": 14912.318713219462, 00:22:54.371 "min_latency_us": 688.8727272727273, 00:22:54.371 "max_latency_us": 7015926.69090909 00:22:54.371 } 00:22:54.371 ], 00:22:54.371 "core_count": 1 00:22:54.371 } 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95323 00:22:54.371 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:54.371 [2024-11-26 16:27:21.764145] Starting SPDK v25.01-pre git sha1 2a91567e4 / 
DPDK 22.11.4 initialization... 00:22:54.371 [2024-11-26 16:27:21.764239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95323 ] 00:22:54.371 [2024-11-26 16:27:21.911902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.371 [2024-11-26 16:27:21.935800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.371 [2024-11-26 16:27:21.970112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:54.371 Running I/O for 90 seconds... 00:22:54.371 8084.00 IOPS, 31.58 MiB/s [2024-11-26T16:28:20.024Z] 9061.50 IOPS, 35.40 MiB/s [2024-11-26T16:28:20.024Z] 9539.67 IOPS, 37.26 MiB/s [2024-11-26T16:28:20.024Z] 9756.75 IOPS, 38.11 MiB/s [2024-11-26T16:28:20.024Z] 9893.40 IOPS, 38.65 MiB/s [2024-11-26T16:28:20.024Z] 9981.83 IOPS, 38.99 MiB/s [2024-11-26T16:28:20.024Z] 10025.57 IOPS, 39.16 MiB/s [2024-11-26T16:28:20.024Z] 10045.38 IOPS, 39.24 MiB/s [2024-11-26T16:28:20.024Z] [2024-11-26 16:27:31.881922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.881974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.371 [2024-11-26 16:27:31.882594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.371 [2024-11-26 
16:27:31.882625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.371 [2024-11-26 16:27:31.882659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.371 [2024-11-26 16:27:31.882690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:54.371 [2024-11-26 16:27:31.882709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.882722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.882740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.882753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.882771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.882784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.882827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.882842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.882860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.882874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.882893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.882906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.882925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.882939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.882957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:360 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.882971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.882990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 
nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.883450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.883482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.883516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.883547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.883579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.883611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.883650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.883683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.372 [2024-11-26 16:27:31.883941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.883974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.883993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.372 [2024-11-26 16:27:31.884006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.372 [2024-11-26 16:27:31.884025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 
16:27:31.884290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.373 [2024-11-26 16:27:31.884518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.373 [2024-11-26 16:27:31.884550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.373 [2024-11-26 16:27:31.884582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.373 [2024-11-26 16:27:31.884614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 
p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.373 [2024-11-26 16:27:31.884646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.373 [2024-11-26 16:27:31.884678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.373 [2024-11-26 16:27:31.884710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.373 [2024-11-26 16:27:31.884786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.884970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.884984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:54.373 [2024-11-26 16:27:31.885411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.373 [2024-11-26 16:27:31.885429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.885466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.885499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.885532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 
[2024-11-26 16:27:31.885725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.885979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.885992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.886011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.886025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:720 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:54.374 [2024-11-26 16:27:31.887309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 
nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:31.887755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:31.887769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:54.374 10034.22 IOPS, 39.20 MiB/s [2024-11-26T16:28:20.027Z] 10061.20 IOPS, 39.30 MiB/s [2024-11-26T16:28:20.027Z] 10080.36 IOPS, 39.38 MiB/s [2024-11-26T16:28:20.027Z] 10104.33 IOPS, 39.47 MiB/s [2024-11-26T16:28:20.027Z] 10126.46 IOPS, 39.56 MiB/s [2024-11-26T16:28:20.027Z] 10151.14 IOPS, 39.65 MiB/s [2024-11-26T16:28:20.027Z] [2024-11-26 16:27:38.417042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:38.417107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:38.417188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.374 [2024-11-26 16:27:38.417207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:54.374 [2024-11-26 16:27:38.417228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.375 [2024-11-26 16:27:38.417241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:54.375 [2024-11-26 16:27:38.417259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.375 [2024-11-26 16:27:38.417271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:54.375 [2024-11-26 16:27:38.417311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.375 [2024-11-26 16:27:38.417326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:54.375 [2024-11-26 16:27:38.417345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.375 [2024-11-26 16:27:38.417357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:54.375 [2024-11-26 16:27:38.417388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
00:22:54.374 [2024-11-26 16:27:38] nvme_qpair.c: *NOTICE*: WRITE sqid:1 nsid:1 lba:13856-14360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:13344-13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0
00:22:54.378 9941.60 IOPS, 38.83 MiB/s [2024-11-26T16:28:20.031Z] 9505.94 IOPS, 37.13 MiB/s [2024-11-26T16:28:20.031Z] 9535.71 IOPS, 37.25 MiB/s [2024-11-26T16:28:20.031Z] 9562.83 IOPS, 37.35 MiB/s [2024-11-26T16:28:20.031Z] 9588.79 IOPS, 37.46 MiB/s [2024-11-26T16:28:20.031Z] 9619.75 IOPS, 37.58 MiB/s [2024-11-26T16:28:20.031Z] 9639.38 IOPS, 37.65 MiB/s [2024-11-26T16:28:20.031Z]
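These bandwidth samples are consistent with 4 KiB I/Os (len:8 blocks of 512 bytes, matching the len:0x1000 SGL length in the notices): 9941.60 IOPS * 4096 B is about 40.7 MB/s, i.e. the 38.83 MiB/s shown. A short C sketch of that conversion follows; the 4 KiB I/O size is inferred from the log rather than read out of it.

#include <stdio.h>

int main(void)
{
        const double iops = 9941.60;        /* sample from the line above */
        const double io_size_bytes = 4096;  /* len:8 * 512 B = 4 KiB per I/O */

        double mib_per_s = iops * io_size_bytes / (1024.0 * 1024.0);
        printf("%.2f IOPS at 4 KiB -> %.2f MiB/s\n", iops, mib_per_s);  /* ~38.83 */
        return 0;
}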
00:22:54.378 [2024-11-26 16:27:45] nvme_qpair.c: *NOTICE*: WRITE sqid:1 nsid:1 lba:107352-107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:106840-107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0
00:22:54.380 [2024-11-26 16:27:45.506597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40
cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.380 [2024-11-26 16:27:45.506629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.380 [2024-11-26 16:27:45.506662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.380 [2024-11-26 16:27:45.506693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.380 [2024-11-26 16:27:45.506725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.380 [2024-11-26 16:27:45.506771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.380 [2024-11-26 16:27:45.506801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.380 [2024-11-26 16:27:45.506837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.380 [2024-11-26 16:27:45.506887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.506925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.506958] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.506977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.506990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.507021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.507053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.507084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.507116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.507147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.507184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.507216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 16:27:45.507248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.380 [2024-11-26 
16:27:45.507287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:54.380 [2024-11-26 16:27:45.507309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.507322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.507354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.507417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.507451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.507484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.507517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.507550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.507583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107168 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.507977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.507990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.508008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.508022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.508041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.508054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.508072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.508085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.508882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.508925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.508961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.508978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 
m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.381 [2024-11-26 16:27:45.509555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.509596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.509634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.381 [2024-11-26 16:27:45.509673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:54.381 [2024-11-26 16:27:45.509698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:45.509711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:45.509737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:45.509750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:45.509775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:45.509789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:45.509813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:45.509828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:45.509853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:45.509867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:54.382 9543.77 IOPS, 37.28 MiB/s [2024-11-26T16:28:20.035Z] 9128.83 IOPS, 35.66 MiB/s [2024-11-26T16:28:20.035Z] 8748.46 IOPS, 34.17 MiB/s [2024-11-26T16:28:20.035Z] 8398.52 IOPS, 32.81 MiB/s [2024-11-26T16:28:20.035Z] 8075.50 IOPS, 31.54 MiB/s [2024-11-26T16:28:20.035Z] 7776.41 IOPS, 30.38 MiB/s [2024-11-26T16:28:20.035Z] 7498.68 IOPS, 29.29 MiB/s [2024-11-26T16:28:20.035Z] 7314.83 IOPS, 28.57 MiB/s [2024-11-26T16:28:20.035Z] 7401.93 IOPS, 28.91 MiB/s [2024-11-26T16:28:20.035Z] 7487.03 IOPS, 29.25 MiB/s [2024-11-26T16:28:20.035Z] 7570.81 IOPS, 29.57 MiB/s [2024-11-26T16:28:20.035Z] 7645.64 IOPS, 29.87 MiB/s [2024-11-26T16:28:20.035Z] 7714.41 IOPS, 30.13 MiB/s [2024-11-26T16:28:20.035Z] 7776.06 IOPS, 30.38 MiB/s [2024-11-26T16:28:20.035Z] [2024-11-26 16:27:58.818979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 
16:27:58.819206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87984 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.382 [2024-11-26 16:27:58.819636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.819969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.819983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.820008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.820022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.820041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.820053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.820072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.820084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.820102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.820115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:54.382 [2024-11-26 16:27:58.820134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.382 [2024-11-26 16:27:58.820147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 
16:27:58.820223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.383 [2024-11-26 16:27:58.820942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.820983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.820996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88128 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.383 [2024-11-26 16:27:58.821377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.383 [2024-11-26 16:27:58.821401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 
[2024-11-26 16:27:58.821446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.821472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.821498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.821523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.821549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.821583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.821609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.821635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.821979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.821991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.822016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.822041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.384 [2024-11-26 16:27:58.822067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.384 [2024-11-26 16:27:58.822367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.384 [2024-11-26 16:27:58.822393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.385 [2024-11-26 16:27:58.822419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.385 [2024-11-26 16:27:58.822445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.385 [2024-11-26 16:27:58.822470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5d80 is same with the state(6) to be set 00:22:54.385 [2024-11-26 16:27:58.822509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87880 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88336 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88344 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88352 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88360 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88368 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88376 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 
16:27:58.822818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88384 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88392 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88400 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.822964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.822974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88408 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.822985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.822997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.823005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.823014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88416 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.823025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.823037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.823046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.823054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88424 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.823066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.823077] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.823086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.823095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88432 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.823106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.823117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.823126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.823135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88440 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.823146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.823157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.823166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.823176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88448 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.823188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.823200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.385 [2024-11-26 16:27:58.823209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.385 [2024-11-26 16:27:58.823218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88456 len:8 PRP1 0x0 PRP2 0x0 00:22:54.385 [2024-11-26 16:27:58.823229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.824274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:54.385 [2024-11-26 16:27:58.824364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.385 [2024-11-26 16:27:58.824398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.385 [2024-11-26 16:27:58.824439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa71410 (9): Bad file descriptor 00:22:54.385 [2024-11-26 16:27:58.824945] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.385 [2024-11-26 16:27:58.824981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa71410 with addr=10.0.0.3, port=4421 00:22:54.385 [2024-11-26 16:27:58.825002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71410 is same with the state(6) to be set 00:22:54.385 [2024-11-26 16:27:58.825086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa71410 
(9): Bad file descriptor 00:22:54.385 [2024-11-26 16:27:58.825146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:54.385 [2024-11-26 16:27:58.825170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:54.385 [2024-11-26 16:27:58.825191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:54.385 [2024-11-26 16:27:58.825210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:54.385 [2024-11-26 16:27:58.825224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:54.385 7835.92 IOPS, 30.61 MiB/s [2024-11-26T16:28:20.038Z] 7883.59 IOPS, 30.80 MiB/s [2024-11-26T16:28:20.038Z] 7939.29 IOPS, 31.01 MiB/s [2024-11-26T16:28:20.038Z] 7992.33 IOPS, 31.22 MiB/s [2024-11-26T16:28:20.038Z] 8041.73 IOPS, 31.41 MiB/s [2024-11-26T16:28:20.038Z] 8089.10 IOPS, 31.60 MiB/s [2024-11-26T16:28:20.038Z] 8134.02 IOPS, 31.77 MiB/s [2024-11-26T16:28:20.038Z] 8171.09 IOPS, 31.92 MiB/s [2024-11-26T16:28:20.039Z] 8212.48 IOPS, 32.08 MiB/s [2024-11-26T16:28:20.039Z] 8248.82 IOPS, 32.22 MiB/s [2024-11-26T16:28:20.039Z] [2024-11-26 16:28:08.871541] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:54.386 8287.04 IOPS, 32.37 MiB/s [2024-11-26T16:28:20.039Z] 8324.19 IOPS, 32.52 MiB/s [2024-11-26T16:28:20.039Z] 8360.25 IOPS, 32.66 MiB/s [2024-11-26T16:28:20.039Z] 8396.00 IOPS, 32.80 MiB/s [2024-11-26T16:28:20.039Z] 8421.84 IOPS, 32.90 MiB/s [2024-11-26T16:28:20.039Z] 8451.29 IOPS, 33.01 MiB/s [2024-11-26T16:28:20.039Z] 8479.62 IOPS, 33.12 MiB/s [2024-11-26T16:28:20.039Z] 8508.75 IOPS, 33.24 MiB/s [2024-11-26T16:28:20.039Z] 8535.93 IOPS, 33.34 MiB/s [2024-11-26T16:28:20.039Z] 8562.25 IOPS, 33.45 MiB/s [2024-11-26T16:28:20.039Z] Received shutdown signal, test time was about 55.352480 seconds 00:22:54.386 00:22:54.386 Latency(us) 00:22:54.386 [2024-11-26T16:28:20.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.386 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:54.386 Verification LBA range: start 0x0 length 0x4000 00:22:54.386 Nvme0n1 : 55.35 8566.49 33.46 0.00 0.00 14912.32 688.87 7015926.69 00:22:54.386 [2024-11-26T16:28:20.039Z] =================================================================================================================== 00:22:54.386 [2024-11-26T16:28:20.039Z] Total : 8566.49 33.46 0.00 0.00 14912.32 688.87 7015926.69 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.386 rmmod nvme_tcp 00:22:54.386 rmmod nvme_fabrics 00:22:54.386 rmmod nvme_keyring 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 95279 ']' 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 95279 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95279 ']' 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95279 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95279 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.386 killing process with pid 95279 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95279' 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95279 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95279 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:54.386 
16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:54.386 00:22:54.386 real 1m0.828s 00:22:54.386 user 2m49.203s 00:22:54.386 sys 0m17.747s 00:22:54.386 ************************************ 00:22:54.386 END TEST nvmf_host_multipath 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.386 16:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:54.386 ************************************ 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.649 ************************************ 00:22:54.649 START TEST nvmf_timeout 00:22:54.649 ************************************ 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:54.649 * Looking for test storage... 
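Condensed, the nvmftestfini/nvmf_veth_fini teardown traced above for the multipath run comes down to the sequence below; a rough sketch that reuses the interface, namespace and pid names from this run, not a verbatim copy of nvmf/common.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # drop the test subsystem first
  sync
  modprobe -v -r nvme-tcp                                    # the rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                         # stop the nvmf_tgt app (pid 95279 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore       # remove only the SPDK-tagged rules
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster && ip link set "$l" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                           # what remove_spdk_ns amounts to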
00:22:54.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:54.649 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:54.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.650 --rc genhtml_branch_coverage=1 00:22:54.650 --rc genhtml_function_coverage=1 00:22:54.650 --rc genhtml_legend=1 00:22:54.650 --rc geninfo_all_blocks=1 00:22:54.650 --rc geninfo_unexecuted_blocks=1 00:22:54.650 00:22:54.650 ' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:54.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.650 --rc genhtml_branch_coverage=1 00:22:54.650 --rc genhtml_function_coverage=1 00:22:54.650 --rc genhtml_legend=1 00:22:54.650 --rc geninfo_all_blocks=1 00:22:54.650 --rc geninfo_unexecuted_blocks=1 00:22:54.650 00:22:54.650 ' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:54.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.650 --rc genhtml_branch_coverage=1 00:22:54.650 --rc genhtml_function_coverage=1 00:22:54.650 --rc genhtml_legend=1 00:22:54.650 --rc geninfo_all_blocks=1 00:22:54.650 --rc geninfo_unexecuted_blocks=1 00:22:54.650 00:22:54.650 ' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:54.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.650 --rc genhtml_branch_coverage=1 00:22:54.650 --rc genhtml_function_coverage=1 00:22:54.650 --rc genhtml_legend=1 00:22:54.650 --rc geninfo_all_blocks=1 00:22:54.650 --rc geninfo_unexecuted_blocks=1 00:22:54.650 00:22:54.650 ' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.650 
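The lcov gate traced above (lt 1.15 2 via cmp_versions in scripts/common.sh) is a plain field-by-field version comparison; a simplified sketch that only covers the '<' case exercised here, not the full helper:

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-:                                          # split versions on '.', '-' and ':'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1    # e.g. 1.15 vs 2: first field decides
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                                               # equal versions are not '<'
  }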
16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.650 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.650 16:28:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:54.650 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:54.651 Cannot find device "nvmf_init_br" 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:54.651 Cannot find device "nvmf_init_br2" 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:54.651 Cannot find device "nvmf_tgt_br" 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:54.651 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.909 Cannot find device "nvmf_tgt_br2" 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:54.910 Cannot find device "nvmf_init_br" 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:54.910 Cannot find device "nvmf_init_br2" 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:54.910 Cannot find device "nvmf_tgt_br" 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:54.910 Cannot find device "nvmf_tgt_br2" 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:54.910 Cannot find device "nvmf_br" 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:54.910 Cannot find device "nvmf_init_if" 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:54.910 Cannot find device "nvmf_init_if2" 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:54.910 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:55.169 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:55.169 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:55.169 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
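Taken together, the nvmf_veth_init steps traced above build a small bridged topology: two initiator-side veth pairs on the host (10.0.0.1/.2) and two target-side pairs whose far ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4), all joined by the nvmf_br bridge and opened for TCP port 4420. A condensed sketch using the same names and addresses as this run:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # host-side initiator links
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target-side links, far ends moved into the netns
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for b in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$b" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF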
00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:55.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:55.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:22:55.170 00:22:55.170 --- 10.0.0.3 ping statistics --- 00:22:55.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.170 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:55.170 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:55.170 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:22:55.170 00:22:55.170 --- 10.0.0.4 ping statistics --- 00:22:55.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.170 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:55.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:55.170 00:22:55.170 --- 10.0.0.1 ping statistics --- 00:22:55.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.170 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:55.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:55.170 00:22:55.170 --- 10.0.0.2 ping statistics --- 00:22:55.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.170 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=96491 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 96491 00:22:55.170 16:28:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96491 ']' 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.170 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:55.170 [2024-11-26 16:28:20.739145] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:22:55.170 [2024-11-26 16:28:20.739241] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.429 [2024-11-26 16:28:20.886153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:55.429 [2024-11-26 16:28:20.909795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.429 [2024-11-26 16:28:20.909863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.429 [2024-11-26 16:28:20.909877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.429 [2024-11-26 16:28:20.909887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.429 [2024-11-26 16:28:20.909897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:55.429 [2024-11-26 16:28:20.910826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.429 [2024-11-26 16:28:20.910840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.429 [2024-11-26 16:28:20.946374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:55.429 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.429 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:55.429 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.429 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.429 16:28:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:55.429 16:28:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.429 16:28:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.429 16:28:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:55.687 [2024-11-26 16:28:21.325337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.946 16:28:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:55.946 Malloc0 00:22:56.204 16:28:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.463 16:28:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:56.721 [2024-11-26 16:28:22.326733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96528 00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96528 /var/tmp/bdevperf.sock 00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96528 ']' 00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.721 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:56.980 [2024-11-26 16:28:22.392268] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:22:56.981 [2024-11-26 16:28:22.392383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96528 ] 00:22:56.981 [2024-11-26 16:28:22.536217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.981 [2024-11-26 16:28:22.556336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.981 [2024-11-26 16:28:22.585133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:57.239 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.239 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:57.239 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:57.499 16:28:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:57.758 NVMe0n1 00:22:57.758 16:28:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96543 00:22:57.758 16:28:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:57.758 16:28:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:57.758 Running I/O for 10 seconds... 
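[editor's note] At this point both sides are up: the trace above shows the target-side RPCs creating the TCP transport, a 64 MiB malloc bdev (512-byte blocks) exposed through subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, and bdevperf (pid 96528) attaching to it as bdev NVMe0 with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 before perform_tests starts the 10-second verify run. Condensed RPC sequence, copied from the trace (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the target-side calls use the default /var/tmp/spdk.sock, the bdevperf calls go to /var/tmp/bdevperf.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The listener removal that opens the next chunk (host/timeout.sh@55, nvmf_subsystem_remove_listener on 10.0.0.3:4420) is what the two timeout options are there for: it breaks the connected queue pair mid-run, which produces the qpair state-change messages and the "ABORTED - SQ DELETION" completions for the outstanding reads that follow.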
00:22:58.694 16:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:58.956 7957.00 IOPS, 31.08 MiB/s [2024-11-26T16:28:24.609Z] [2024-11-26 16:28:24.469155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd3e0 is same with the state(6) to be set
[log trimmed: the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xcfd3e0 repeats, with only the timestamp changing, from 16:28:24.469215 through 16:28:24.470105 after the listener is removed.]
00:22:58.957 [2024-11-26 16:28:24.470166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.957 [2024-11-26 16:28:24.470196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log trimmed: an identical READ command / "ABORTED - SQ DELETION (00/08)" completion pair is printed for every outstanding I/O on sqid:1, covering lba 73000 through 73864 in steps of 8 (cid values vary), between 16:28:24.470216 and 16:28:24.472431.]
00:22:58.960 [2024-11-26 16:28:24.472442] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.960 [2024-11-26 16:28:24.472452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:22 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:58.960 [2024-11-26 16:28:24.472779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.960 [2024-11-26 16:28:24.472800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.960 [2024-11-26 16:28:24.472810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa12d40 is same with the state(6) to be set 00:22:58.960 [2024-11-26 16:28:24.472822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.960 [2024-11-26 16:28:24.472830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.961 [2024-11-26 16:28:24.472838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73888 len:8 PRP1 0x0 PRP2 0x0 00:22:58.961 [2024-11-26 16:28:24.472847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.961 [2024-11-26 16:28:24.473151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:58.961 [2024-11-26 16:28:24.473239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f1170 (9): Bad file descriptor 00:22:58.961 [2024-11-26 16:28:24.473338] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.961 [2024-11-26 
16:28:24.473383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f1170 with addr=10.0.0.3, port=4420 00:22:58.961 [2024-11-26 16:28:24.473398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1170 is same with the state(6) to be set 00:22:58.961 [2024-11-26 16:28:24.473417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f1170 (9): Bad file descriptor 00:22:58.961 [2024-11-26 16:28:24.473433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:58.961 [2024-11-26 16:28:24.473442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:58.961 [2024-11-26 16:28:24.473452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:58.961 [2024-11-26 16:28:24.473462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:58.961 [2024-11-26 16:28:24.473472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:58.961 16:28:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:00.834 4562.00 IOPS, 17.82 MiB/s [2024-11-26T16:28:26.487Z] 3041.33 IOPS, 11.88 MiB/s [2024-11-26T16:28:26.487Z] [2024-11-26 16:28:26.473570] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.834 [2024-11-26 16:28:26.473650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f1170 with addr=10.0.0.3, port=4420 00:23:00.834 [2024-11-26 16:28:26.473665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1170 is same with the state(6) to be set 00:23:00.834 [2024-11-26 16:28:26.473687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f1170 (9): Bad file descriptor 00:23:00.834 [2024-11-26 16:28:26.473715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:00.834 [2024-11-26 16:28:26.473725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:00.834 [2024-11-26 16:28:26.473734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:00.834 [2024-11-26 16:28:26.473745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
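For orientation while reading the repeated uring_sock_create connect() failures above: errno = 111 on Linux is ECONNREFUSED, i.e. nothing is accepting connections on 10.0.0.3:4420 anymore, which is consistent with the timeout test having torn down the target listener. A throwaway shell one-liner (not part of timeout.sh) to confirm the mapping:

  # errno 111 -> ECONNREFUSED ("Connection refused") on Linux
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'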
00:23:00.834 [2024-11-26 16:28:26.473754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:01.093 16:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:01.093 16:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.093 16:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:01.093 16:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:01.093 16:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:01.094 16:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:01.094 16:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:01.352 16:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:01.352 16:28:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:02.856 2281.00 IOPS, 8.91 MiB/s [2024-11-26T16:28:28.509Z] 1824.80 IOPS, 7.13 MiB/s [2024-11-26T16:28:28.509Z] [2024-11-26 16:28:28.473961] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.856 [2024-11-26 16:28:28.474040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f1170 with addr=10.0.0.3, port=4420 00:23:02.856 [2024-11-26 16:28:28.474055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f1170 is same with the state(6) to be set 00:23:02.856 [2024-11-26 16:28:28.474078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f1170 (9): Bad file descriptor 00:23:02.856 [2024-11-26 16:28:28.474096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:02.856 [2024-11-26 16:28:28.474105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:02.856 [2024-11-26 16:28:28.474115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:02.856 [2024-11-26 16:28:28.474125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:02.856 [2024-11-26 16:28:28.474135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:04.726 1520.67 IOPS, 5.94 MiB/s [2024-11-26T16:28:30.638Z] 1303.43 IOPS, 5.09 MiB/s [2024-11-26T16:28:30.638Z] [2024-11-26 16:28:30.474191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:04.985 [2024-11-26 16:28:30.474254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:04.985 [2024-11-26 16:28:30.474280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:04.985 [2024-11-26 16:28:30.474290] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:23:04.985 [2024-11-26 16:28:30.474300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
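For reference, the get_controller and get_bdev checks exercised at host/timeout.sh@57 and @58 above reduce to two rpc.py queries against the bdevperf RPC socket. A minimal standalone sketch using the same commands, socket path, and expected names that appear in this log (not the full helper functions):

  # query the bdevperf app over its RPC socket and pull out the names with jq
  ctrl=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
  # while the target is unreachable but the loss timeout has not expired, both are still present
  [[ "$ctrl" == "NVMe0" && "$bdev" == "NVMe0n1" ]] && echo "controller and bdev still present"
  # once the ctrlr-loss timeout expires they are deleted and both queries return empty strings,
  # which is what the later '' == '' checks in this log verify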
00:23:05.921 1140.50 IOPS, 4.46 MiB/s 00:23:05.921 Latency(us) 00:23:05.921 [2024-11-26T16:28:31.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.921 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:05.921 Verification LBA range: start 0x0 length 0x4000 00:23:05.921 NVMe0n1 : 8.16 1117.50 4.37 15.68 0.00 112748.26 3395.96 7015926.69 00:23:05.921 [2024-11-26T16:28:31.574Z] =================================================================================================================== 00:23:05.921 [2024-11-26T16:28:31.574Z] Total : 1117.50 4.37 15.68 0.00 112748.26 3395.96 7015926.69 00:23:05.921 { 00:23:05.921 "results": [ 00:23:05.921 { 00:23:05.921 "job": "NVMe0n1", 00:23:05.921 "core_mask": "0x4", 00:23:05.921 "workload": "verify", 00:23:05.921 "status": "finished", 00:23:05.921 "verify_range": { 00:23:05.921 "start": 0, 00:23:05.921 "length": 16384 00:23:05.921 }, 00:23:05.921 "queue_depth": 128, 00:23:05.921 "io_size": 4096, 00:23:05.921 "runtime": 8.164636, 00:23:05.921 "iops": 1117.5023601787025, 00:23:05.921 "mibps": 4.3652435944480565, 00:23:05.921 "io_failed": 128, 00:23:05.921 "io_timeout": 0, 00:23:05.921 "avg_latency_us": 112748.25913296388, 00:23:05.921 "min_latency_us": 3395.9563636363637, 00:23:05.921 "max_latency_us": 7015926.69090909 00:23:05.921 } 00:23:05.921 ], 00:23:05.921 "core_count": 1 00:23:05.921 } 00:23:06.488 16:28:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:06.488 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.488 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:06.746 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:06.746 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:06.746 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:06.746 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96543 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96528 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96528 ']' 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96528 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96528 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.005 killing process with pid 96528 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96528' 00:23:07.005 Received shutdown signal, test time was about 9.217275 
seconds 00:23:07.005 00:23:07.005 Latency(us) 00:23:07.005 [2024-11-26T16:28:32.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.005 [2024-11-26T16:28:32.658Z] =================================================================================================================== 00:23:07.005 [2024-11-26T16:28:32.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96528 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96528 00:23:07.005 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:07.264 [2024-11-26 16:28:32.846324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:07.264 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96666 00:23:07.264 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:07.264 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96666 /var/tmp/bdevperf.sock 00:23:07.264 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96666 ']' 00:23:07.264 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.264 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.264 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.264 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.264 16:28:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:07.522 [2024-11-26 16:28:32.917597] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:23:07.522 [2024-11-26 16:28:32.917707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96666 ] 00:23:07.522 [2024-11-26 16:28:33.062662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.522 [2024-11-26 16:28:33.081779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.522 [2024-11-26 16:28:33.110961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:08.458 16:28:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.458 16:28:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:08.458 16:28:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:08.458 16:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:09.025 NVMe0n1 00:23:09.025 16:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96688 00:23:09.025 16:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:09.025 16:28:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:09.025 Running I/O for 10 seconds... 
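The bdev_nvme_attach_controller call above is where the timeout behaviour under test is configured. A minimal sketch of the same call with the three reconnect knobs annotated; the flag semantics below are inferred from the option names and from the behaviour observed in this log, not restated from SPDK documentation:

  # reconnect-delay-sec 1:      retry the TCP connection roughly once per second
  # fast-io-fail-timeout-sec 2: after ~2s of disconnect, start failing queued I/O instead of holding it
  # ctrlr-loss-timeout-sec 5:   after ~5s of disconnect, give up and delete the controller and its bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1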
00:23:09.961 16:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:10.222 8084.00 IOPS, 31.58 MiB/s [2024-11-26T16:28:35.875Z] [2024-11-26 16:28:35.653539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71264 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.653967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:10.222 [2024-11-26 16:28:35.653987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.653997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.654005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.654016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.654024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.654034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.654042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.654052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.654061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.654071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.654079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.654089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.654097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.654107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.654115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.654125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.222 [2024-11-26 16:28:35.654133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.222 [2024-11-26 16:28:35.654143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:10.223 [2024-11-26 16:28:35.654569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654754] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.223 [2024-11-26 16:28:35.654892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.223 [2024-11-26 16:28:35.654902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.654911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.654921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.654929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.654939] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.654948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.654958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.654966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.654976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.654984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.654995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.655003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.655021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.655040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.655059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.655077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.655096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70832 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:10.224 [2024-11-26 16:28:35.655319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.655404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.655423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655514] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.224 [2024-11-26 16:28:35.655570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.224 [2024-11-26 16:28:35.655644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.224 [2024-11-26 16:28:35.655655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.655985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.655995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.225 [2024-11-26 16:28:35.656004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.656013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdbfc0 is same with the state(6) to be set 00:23:10.225 [2024-11-26 16:28:35.656025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.225 [2024-11-26 16:28:35.656033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.225 [2024-11-26 16:28:35.656040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71184 len:8 PRP1 0x0 PRP2 0x0 00:23:10.225 [2024-11-26 16:28:35.656048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.225 [2024-11-26 16:28:35.656319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:10.225 [2024-11-26 16:28:35.656412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cba3f0 (9): Bad file descriptor 00:23:10.225 [2024-11-26 16:28:35.656531] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.225 [2024-11-26 16:28:35.656552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cba3f0 with addr=10.0.0.3, port=4420 00:23:10.225 [2024-11-26 
16:28:35.656562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cba3f0 is same with the state(6) to be set 00:23:10.225 [2024-11-26 16:28:35.656580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cba3f0 (9): Bad file descriptor 00:23:10.225 [2024-11-26 16:28:35.656596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:10.225 [2024-11-26 16:28:35.656617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:10.225 [2024-11-26 16:28:35.656628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:10.225 [2024-11-26 16:28:35.656639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:10.225 [2024-11-26 16:28:35.656651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:10.225 16:28:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:11.159 4426.50 IOPS, 17.29 MiB/s [2024-11-26T16:28:36.812Z] [2024-11-26 16:28:36.656753] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.159 [2024-11-26 16:28:36.656972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cba3f0 with addr=10.0.0.3, port=4420 00:23:11.159 [2024-11-26 16:28:36.656996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cba3f0 is same with the state(6) to be set 00:23:11.159 [2024-11-26 16:28:36.657023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cba3f0 (9): Bad file descriptor 00:23:11.159 [2024-11-26 16:28:36.657056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:11.159 [2024-11-26 16:28:36.657080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:11.159 [2024-11-26 16:28:36.657098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:11.159 [2024-11-26 16:28:36.657108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:11.159 [2024-11-26 16:28:36.657118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:11.159 16:28:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:11.417 [2024-11-26 16:28:36.941150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:11.417 16:28:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96688 00:23:12.243 2951.00 IOPS, 11.53 MiB/s [2024-11-26T16:28:37.896Z] [2024-11-26 16:28:37.674594] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
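The reconnect attempts above keep failing with connect() errno 111 until the target-side listener comes back: host/timeout.sh@91 re-adds it with rpc.py nvmf_subsystem_add_listener, the target reports it is listening on 10.0.0.3 port 4420 again, and the next reset of nqn.2016-06.io.spdk:cnode1 completes ("Resetting controller successful"). A rough sketch of that listener toggle, using only the rpc.py invocations visible in this log (the surrounding shell is illustrative, not the literal host/timeout.sh):

    # Illustrative sketch, not the actual test script -- re-create the TCP listener so the
    # host's reconnect attempts to 10.0.0.3:4420 can succeed again:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 1   # the test sleeps (host/timeout.sh@90/@98) while the host reconnects and I/O resumes
    # Dropping the listener again provokes the next round of connect() failures and aborted I/O:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420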
00:23:14.115 2213.25 IOPS, 8.65 MiB/s [2024-11-26T16:28:40.704Z] 3652.00 IOPS, 14.27 MiB/s [2024-11-26T16:28:41.641Z] 4820.67 IOPS, 18.83 MiB/s [2024-11-26T16:28:42.602Z] 5680.57 IOPS, 22.19 MiB/s [2024-11-26T16:28:43.980Z] 6305.50 IOPS, 24.63 MiB/s [2024-11-26T16:28:44.916Z] 6791.56 IOPS, 26.53 MiB/s [2024-11-26T16:28:44.916Z] 7186.80 IOPS, 28.07 MiB/s
00:23:19.263 Latency(us)
00:23:19.263 [2024-11-26T16:28:44.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:19.263 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:19.263 Verification LBA range: start 0x0 length 0x4000
00:23:19.263 NVMe0n1 : 10.01 7192.41 28.10 0.00 0.00 17762.56 2293.76 3019898.88
00:23:19.263 [2024-11-26T16:28:44.916Z] ===================================================================================================================
00:23:19.263 [2024-11-26T16:28:44.916Z] Total : 7192.41 28.10 0.00 0.00 17762.56 2293.76 3019898.88
00:23:19.263 {
00:23:19.263   "results": [
00:23:19.263     {
00:23:19.263       "job": "NVMe0n1",
00:23:19.263       "core_mask": "0x4",
00:23:19.263       "workload": "verify",
00:23:19.263       "status": "finished",
00:23:19.263       "verify_range": {
00:23:19.263         "start": 0,
00:23:19.263         "length": 16384
00:23:19.263       },
00:23:19.263       "queue_depth": 128,
00:23:19.263       "io_size": 4096,
00:23:19.263       "runtime": 10.009997,
00:23:19.263       "iops": 7192.40974797495,
00:23:19.263       "mibps": 28.095350578027148,
00:23:19.263       "io_failed": 0,
00:23:19.263       "io_timeout": 0,
00:23:19.263       "avg_latency_us": 17762.557976453234,
00:23:19.263       "min_latency_us": 2293.76,
00:23:19.263       "max_latency_us": 3019898.88
00:23:19.263     }
00:23:19.263   ],
00:23:19.263   "core_count": 1
00:23:19.263 }
00:23:19.263 16:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96794
00:23:19.263 16:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:19.263 16:28:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:23:19.263 Running I/O for 10 seconds...
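As a quick sanity check on the summary above (illustrative only, not part of the test output): bdevperf's MiB/s column is just IOPS multiplied by the 4096-byte IO size, so the reported 7192.41 IOPS works out to about 28.10 MiB/s, matching the "mibps" field in the JSON:

    # Illustrative arithmetic, not from the log: 7192.41 IOPS x 4096 B per I/O,
    # converted to MiB/s (2^20 bytes per MiB) -- prints 28.10 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 7192.41 * 4096 / (1024 * 1024) }'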
00:23:20.202 16:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:20.202 7957.00 IOPS, 31.08 MiB/s [2024-11-26T16:28:45.855Z] [2024-11-26 16:28:45.839910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.839958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.839994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72800 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.202 [2024-11-26 16:28:45.840235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.202 [2024-11-26 16:28:45.840253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.202 [2024-11-26 16:28:45.840271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.202 [2024-11-26 16:28:45.840290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.202 [2024-11-26 16:28:45.840307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:20.202 [2024-11-26 16:28:45.840324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.202 [2024-11-26 16:28:45.840334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.202 [2024-11-26 16:28:45.840342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840556] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.203 [2024-11-26 16:28:45.840573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.203 [2024-11-26 16:28:45.840591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.203 [2024-11-26 16:28:45.840828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.840983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.840992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.841004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.841013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.841025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.841035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.841060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.841069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.203 [2024-11-26 16:28:45.841080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-11-26 16:28:45.841090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 
[2024-11-26 16:28:45.841491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841684] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-11-26 16:28:45.841712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.204 [2024-11-26 16:28:45.841723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.841986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.841994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72584 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.205 [2024-11-26 16:28:45.842370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.205 [2024-11-26 16:28:45.842390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:20.206 [2024-11-26 16:28:45.842503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.206 [2024-11-26 16:28:45.842621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cee430 is same with the state(6) to be set 00:23:20.206 [2024-11-26 16:28:45.842644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.206 [2024-11-26 16:28:45.842651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.206 [2024-11-26 16:28:45.842659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72720 len:8 PRP1 0x0 PRP2 0x0 00:23:20.206 [2024-11-26 16:28:45.842669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.206 [2024-11-26 16:28:45.842928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:20.206 [2024-11-26 16:28:45.843006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cba3f0 (9): Bad file descriptor 00:23:20.206 [2024-11-26 16:28:45.843118] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.206 [2024-11-26 16:28:45.843140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x1cba3f0 with addr=10.0.0.3, port=4420 00:23:20.206 [2024-11-26 16:28:45.843150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cba3f0 is same with the state(6) to be set 00:23:20.206 [2024-11-26 16:28:45.843167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cba3f0 (9): Bad file descriptor 00:23:20.206 [2024-11-26 16:28:45.843183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:20.206 [2024-11-26 16:28:45.843192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:20.206 [2024-11-26 16:28:45.843202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:20.206 [2024-11-26 16:28:45.843227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:20.206 [2024-11-26 16:28:45.843239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:20.465 16:28:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:21.401 4490.50 IOPS, 17.54 MiB/s [2024-11-26T16:28:47.054Z] [2024-11-26 16:28:46.843327] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.401 [2024-11-26 16:28:46.843412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cba3f0 with addr=10.0.0.3, port=4420 00:23:21.401 [2024-11-26 16:28:46.843428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cba3f0 is same with the state(6) to be set 00:23:21.401 [2024-11-26 16:28:46.843447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cba3f0 (9): Bad file descriptor 00:23:21.401 [2024-11-26 16:28:46.843463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:21.401 [2024-11-26 16:28:46.843472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:21.401 [2024-11-26 16:28:46.843481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:21.401 [2024-11-26 16:28:46.843507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:21.401 [2024-11-26 16:28:46.843517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:22.336 2993.67 IOPS, 11.69 MiB/s [2024-11-26T16:28:47.989Z] [2024-11-26 16:28:47.843592] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.336 [2024-11-26 16:28:47.843647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cba3f0 with addr=10.0.0.3, port=4420 00:23:22.336 [2024-11-26 16:28:47.843660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cba3f0 is same with the state(6) to be set 00:23:22.336 [2024-11-26 16:28:47.843679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cba3f0 (9): Bad file descriptor 00:23:22.336 [2024-11-26 16:28:47.843695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:22.336 [2024-11-26 16:28:47.843703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:22.336 [2024-11-26 16:28:47.843712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:22.336 [2024-11-26 16:28:47.843721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:22.336 [2024-11-26 16:28:47.843730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:23.271 2245.25 IOPS, 8.77 MiB/s [2024-11-26T16:28:48.924Z] [2024-11-26 16:28:48.846723] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.271 [2024-11-26 16:28:48.846791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cba3f0 with addr=10.0.0.3, port=4420 00:23:23.271 [2024-11-26 16:28:48.846804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cba3f0 is same with the state(6) to be set 00:23:23.271 [2024-11-26 16:28:48.847013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cba3f0 (9): Bad file descriptor 00:23:23.271 [2024-11-26 16:28:48.847221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:23.271 [2024-11-26 16:28:48.847232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:23.271 [2024-11-26 16:28:48.847241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:23.272 [2024-11-26 16:28:48.847249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:23.272 [2024-11-26 16:28:48.847257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:23.272 16:28:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:23.530 [2024-11-26 16:28:49.113531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:23.530 16:28:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96794
00:23:24.355 1796.20 IOPS, 7.02 MiB/s [2024-11-26T16:28:50.008Z] [2024-11-26 16:28:49.874662] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
00:23:26.229 2966.00 IOPS, 11.59 MiB/s [2024-11-26T16:28:52.817Z] 4081.71 IOPS, 15.94 MiB/s [2024-11-26T16:28:53.752Z] 4917.50 IOPS, 19.21 MiB/s [2024-11-26T16:28:55.127Z] 5582.67 IOPS, 21.81 MiB/s [2024-11-26T16:28:55.127Z] 6104.70 IOPS, 23.85 MiB/s
00:23:29.474 Latency(us)
00:23:29.474 [2024-11-26T16:28:55.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:29.474 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:29.474 Verification LBA range: start 0x0 length 0x4000
00:23:29.474 NVMe0n1 : 10.01 6111.91 23.87 4152.80 0.00 12446.60 666.53 3019898.88
00:23:29.474 [2024-11-26T16:28:55.127Z] ===================================================================================================================
00:23:29.474 [2024-11-26T16:28:55.127Z] Total : 6111.91 23.87 4152.80 0.00 12446.60 0.00 3019898.88
00:23:29.474 {
00:23:29.474 "results": [
00:23:29.474 {
00:23:29.474 "job": "NVMe0n1",
00:23:29.474 "core_mask": "0x4",
00:23:29.474 "workload": "verify",
00:23:29.474 "status": "finished",
00:23:29.474 "verify_range": {
00:23:29.474 "start": 0,
00:23:29.474 "length": 16384
00:23:29.474 },
00:23:29.474 "queue_depth": 128,
00:23:29.474 "io_size": 4096,
00:23:29.474 "runtime": 10.009141,
00:23:29.474 "iops": 6111.913100235075,
00:23:29.474 "mibps": 23.874660547793262,
00:23:29.474 "io_failed": 41566,
00:23:29.474 "io_timeout": 0,
00:23:29.474 "avg_latency_us": 12446.598327904854,
00:23:29.474 "min_latency_us": 666.5309090909091,
00:23:29.474 "max_latency_us": 3019898.88
00:23:29.474 }
00:23:29.474 ],
00:23:29.474 "core_count": 1
00:23:29.474 }
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96666
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96666 ']'
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96666
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96666
00:23:29.474 killing process with pid 96666
Received shutdown signal, test time was about 10.000000 seconds
00:23:29.474
00:23:29.474 Latency(us)
00:23:29.474 [2024-11-26T16:28:55.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:29.474 [2024-11-26T16:28:55.127Z] ===================================================================================================================
00:23:29.474 [2024-11-26T16:28:55.127Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96666'
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96666
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96666
00:23:29.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96903
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96903 /var/tmp/bdevperf.sock
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96903 ']'
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:29.474 16:28:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:29.474 [2024-11-26 16:28:54.936082] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization...
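The JSON object a few lines up is bdevperf's raw result record for the 10-second run summarized in the first Latency(us) table above: the IOPS and MiB/s columns are the rounded "iops" and "mibps" fields, MiB/s is iops * io_size / 2^20, and Fail/s is io_failed divided by the runtime (41566 / 10.009141 ≈ 4152.8), which lines up with the long stretch of ABORTED - SQ DELETION completions earlier in this log. A quick sanity check of those relationships (not part of the original log; any shell with awk will do):

  awk 'BEGIN {
      iops = 6111.913100235075; io_size = 4096      # from the JSON record above
      io_failed = 41566; runtime = 10.009141
      printf "%.2f MiB/s  %.2f Fail/s\n", iops * io_size / (1024 * 1024), io_failed / runtime
  }'
  # prints: 23.87 MiB/s  4152.80 Fail/s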
00:23:29.474 [2024-11-26 16:28:54.936800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96903 ]
00:23:29.474 [2024-11-26 16:28:55.082120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:29.474 [2024-11-26 16:28:55.101746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:29.474 [2024-11-26 16:28:55.130592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:23:29.732 16:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:29.732 16:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:23:29.732 16:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96903 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:23:29.732 16:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96910
00:23:29.732 16:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:23:29.991 16:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:23:30.250 NVMe0n1
00:23:30.250 16:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96955
00:23:30.250 16:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:23:30.250 16:28:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:30.250 Running I/O for 10 seconds...
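Condensed from the xtrace lines above, the host-side sequence for this timeout case is roughly the following (binary paths, RPC socket, address and NQN are copied verbatim from the trace; the backgrounding and comments are a sketch, not the literal test script):

  # bdevperf on core mask 0x4: queue depth 128, 4 KiB random reads for 10 s, driven over its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

  # NVMe bdev module options used by the test, exactly as captured in the trace (-r -1 -e 9)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9

  # attach the NVMe/TCP controller; retry the connection every 2 s, give up on the controller after 5 s of loss
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the I/O run against the resulting NVMe0n1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &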
00:23:31.187 16:28:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:31.449 17272.00 IOPS, 67.47 MiB/s [2024-11-26T16:28:57.102Z] [2024-11-26 16:28:57.005046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 
16:28:57.005280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to 
be set 00:23:31.449 [2024-11-26 16:28:57.005493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa860 is same with the state(6) to be set 00:23:31.449 [2024-11-26 16:28:57.005983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.449 [2024-11-26 16:28:57.006024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.449 [2024-11-26 16:28:57.006046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.449 [2024-11-26 16:28:57.006057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:31.450 [2024-11-26 16:28:57.006711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.450 [2024-11-26 16:28:57.006889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.450 [2024-11-26 16:28:57.006897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.006907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.006916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 
16:28:57.006926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.006935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.006946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.006954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.006964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.006973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.006984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.006992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83296 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.451 [2024-11-26 16:28:57.007689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.451 [2024-11-26 16:28:57.007700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:31.452 [2024-11-26 16:28:57.007739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 
16:28:57.007949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.007987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.007997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008331] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.452 [2024-11-26 16:28:57.008511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.452 [2024-11-26 16:28:57.008522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.453 [2024-11-26 16:28:57.008530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.453 [2024-11-26 16:28:57.008542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.453 [2024-11-26 16:28:57.008550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.453 [2024-11-26 16:28:57.008561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.453 [2024-11-26 16:28:57.008571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.453 [2024-11-26 16:28:57.008582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.453 [2024-11-26 16:28:57.008590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.453 [2024-11-26 16:28:57.008601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.453 [2024-11-26 16:28:57.008611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.453 [2024-11-26 16:28:57.008622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.453 [2024-11-26 16:28:57.008631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.453 [2024-11-26 16:28:57.008641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.453 [2024-11-26 16:28:57.008650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.453 [2024-11-26 16:28:57.008661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156bec0 is same with the state(6) to be set 00:23:31.453 [2024-11-26 16:28:57.008672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:31.453 [2024-11-26 16:28:57.008680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:31.453 [2024-11-26 16:28:57.008690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8952 len:8 PRP1 0x0 PRP2 0x0 00:23:31.453 [2024-11-26 16:28:57.008699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.453 [2024-11-26 16:28:57.009099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:31.453 [2024-11-26 16:28:57.009206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154a190 (9): Bad file descriptor 00:23:31.453 [2024-11-26 16:28:57.009311] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.453 [2024-11-26 16:28:57.009333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154a190 with addr=10.0.0.3, port=4420 00:23:31.453 [2024-11-26 16:28:57.009365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154a190 is same with the state(6) to be set 00:23:31.453 [2024-11-26 16:28:57.009386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154a190 (9): Bad file descriptor 00:23:31.453 [2024-11-26 16:28:57.009403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:31.453 [2024-11-26 16:28:57.009413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:31.453 [2024-11-26 16:28:57.009423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:31.453 [2024-11-26 16:28:57.009433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:31.453 [2024-11-26 16:28:57.009444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:31.453 16:28:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96955 00:23:33.325 10002.50 IOPS, 39.07 MiB/s [2024-11-26T16:28:59.237Z] 6668.33 IOPS, 26.05 MiB/s [2024-11-26T16:28:59.237Z] [2024-11-26 16:28:59.009581] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:33.584 [2024-11-26 16:28:59.009660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154a190 with addr=10.0.0.3, port=4420 00:23:33.584 [2024-11-26 16:28:59.009675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154a190 is same with the state(6) to be set 00:23:33.584 [2024-11-26 16:28:59.009698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154a190 (9): Bad file descriptor 00:23:33.584 [2024-11-26 16:28:59.009715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:33.584 [2024-11-26 16:28:59.009724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:33.584 [2024-11-26 16:28:59.009735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:33.584 [2024-11-26 16:28:59.009745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:33.584 [2024-11-26 16:28:59.009755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:35.454 5001.25 IOPS, 19.54 MiB/s [2024-11-26T16:29:01.107Z] 4001.00 IOPS, 15.63 MiB/s [2024-11-26T16:29:01.107Z] [2024-11-26 16:29:01.009884] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.454 [2024-11-26 16:29:01.009962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x154a190 with addr=10.0.0.3, port=4420 00:23:35.454 [2024-11-26 16:29:01.009976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154a190 is same with the state(6) to be set 00:23:35.454 [2024-11-26 16:29:01.009996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154a190 (9): Bad file descriptor 00:23:35.454 [2024-11-26 16:29:01.010013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:35.454 [2024-11-26 16:29:01.010022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:35.454 [2024-11-26 16:29:01.010032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:35.454 [2024-11-26 16:29:01.010043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
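The block above is the expected failure pattern for this timeout test rather than a malfunction: every queued READ is completed with ABORTED - SQ DELETION (status 00/08) once the submission queue is torn down for the controller reset, and each later reconnect fails in uring_sock_create with errno = 111 (ECONNREFUSED) because nothing is listening on 10.0.0.3:4420 any more. The failed connect() attempts land at 16:28:57.009, 16:28:59.009 and 16:29:01.009, roughly 2 s apart, and the final retry at 16:29:03 is abandoned outright because the controller is already in failed state. A minimal sketch for pulling that cadence out of a saved copy of this transcript (the file name build.log is hypothetical, and it assumes each log record sits on its own line):

    # list the spacing between failed connect() attempts (errno = 111)
    grep -o '\[2024-11-26 [0-9:.]*\] uring.c: 664:uring_sock_create' build.log |
      awk -F'[][ ]' '{ split($3, t, ":"); s = t[1]*3600 + t[2]*60 + t[3];
                       if (NR > 1) printf "%.3f s since previous attempt\n", s - prev;
                       prev = s }'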
00:23:35.454 [2024-11-26 16:29:01.010053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:37.326 3334.17 IOPS, 13.02 MiB/s [2024-11-26T16:29:03.238Z] 2857.86 IOPS, 11.16 MiB/s [2024-11-26T16:29:03.238Z] [2024-11-26 16:29:03.010138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:37.585 [2024-11-26 16:29:03.010193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:37.585 [2024-11-26 16:29:03.010221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:37.585 [2024-11-26 16:29:03.010230] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:23:37.585 [2024-11-26 16:29:03.010241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:38.521 2500.62 IOPS, 9.77 MiB/s 00:23:38.521 Latency(us) 00:23:38.521 [2024-11-26T16:29:04.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.522 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:38.522 NVMe0n1 : 8.17 2449.92 9.57 15.68 0.00 51859.02 6970.65 7015926.69 00:23:38.522 [2024-11-26T16:29:04.175Z] =================================================================================================================== 00:23:38.522 [2024-11-26T16:29:04.175Z] Total : 2449.92 9.57 15.68 0.00 51859.02 6970.65 7015926.69 00:23:38.522 { 00:23:38.522 "results": [ 00:23:38.522 { 00:23:38.522 "job": "NVMe0n1", 00:23:38.522 "core_mask": "0x4", 00:23:38.522 "workload": "randread", 00:23:38.522 "status": "finished", 00:23:38.522 "queue_depth": 128, 00:23:38.522 "io_size": 4096, 00:23:38.522 "runtime": 8.165564, 00:23:38.522 "iops": 2449.9226262876637, 00:23:38.522 "mibps": 9.570010258936186, 00:23:38.522 "io_failed": 128, 00:23:38.522 "io_timeout": 0, 00:23:38.522 "avg_latency_us": 51859.0172386358, 00:23:38.522 "min_latency_us": 6970.647272727273, 00:23:38.522 "max_latency_us": 7015926.69090909 00:23:38.522 } 00:23:38.522 ], 00:23:38.522 "core_count": 1 00:23:38.522 } 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:38.522 Attaching 5 probes... 
00:23:38.522 1338.868647: reset bdev controller NVMe0 00:23:38.522 1339.029885: reconnect bdev controller NVMe0 00:23:38.522 3339.248634: reconnect delay bdev controller NVMe0 00:23:38.522 3339.281916: reconnect bdev controller NVMe0 00:23:38.522 5339.559402: reconnect delay bdev controller NVMe0 00:23:38.522 5339.592205: reconnect bdev controller NVMe0 00:23:38.522 7339.881554: reconnect delay bdev controller NVMe0 00:23:38.522 7339.914571: reconnect bdev controller NVMe0 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96910 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96903 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96903 ']' 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96903 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96903 00:23:38.522 killing process with pid 96903 00:23:38.522 Received shutdown signal, test time was about 8.232840 seconds 00:23:38.522 00:23:38.522 Latency(us) 00:23:38.522 [2024-11-26T16:29:04.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.522 [2024-11-26T16:29:04.175Z] =================================================================================================================== 00:23:38.522 [2024-11-26T16:29:04.175Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96903' 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96903 00:23:38.522 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96903 00:23:38.780 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.780 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:38.780 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:38.780 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:38.780 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.039 16:29:04 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.039 rmmod nvme_tcp 00:23:39.039 rmmod nvme_fabrics 00:23:39.039 rmmod nvme_keyring 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 96491 ']' 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 96491 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96491 ']' 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96491 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96491 00:23:39.039 killing process with pid 96491 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96491' 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96491 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96491 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:39.039 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:39.299 16:29:04 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:39.299 ************************************ 00:23:39.299 END TEST nvmf_timeout 00:23:39.299 ************************************ 00:23:39.299 00:23:39.299 real 0m44.859s 00:23:39.299 user 2m11.702s 00:23:39.299 sys 0m5.123s 00:23:39.299 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.300 16:29:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:39.300 16:29:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:39.300 16:29:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:39.300 00:23:39.300 real 5m41.492s 00:23:39.300 user 16m5.917s 00:23:39.300 sys 1m14.740s 00:23:39.300 16:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.300 16:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.300 ************************************ 00:23:39.300 END TEST nvmf_host 00:23:39.300 ************************************ 00:23:39.559 16:29:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:39.559 16:29:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:39.559 00:23:39.559 real 15m2.749s 00:23:39.559 user 39m40.294s 00:23:39.559 sys 3m57.966s 00:23:39.559 16:29:04 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.559 16:29:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.559 ************************************ 00:23:39.559 END TEST nvmf_tcp 00:23:39.559 ************************************ 00:23:39.559 16:29:05 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:23:39.559 16:29:05 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:39.559 16:29:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:39.559 16:29:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.559 16:29:05 -- common/autotest_common.sh@10 -- # set +x 00:23:39.559 ************************************ 00:23:39.559 START TEST nvmf_dif 00:23:39.559 ************************************ 00:23:39.559 16:29:05 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:39.559 * Looking for test storage... 
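From here the transcript switches to the nvmf_dif suite: dif.sh sources nvmf/common.sh, brings up the veth/namespace test network (nvmf_veth_init, addresses 10.0.0.1 through 10.0.0.4), and then configures a DIF-capable target for the fio jobs. Collected from the rpc_cmd invocations that appear further down in this same log, a minimal standalone sketch of that target configuration would look roughly like the following; it assumes an nvmf_tgt is already running (here inside the nvmf_tgt_ns_spdk namespace) and reuses the addresses and sizes from this run, with rpc as local shorthand for scripts/rpc.py:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with DIF insert/strip enabled
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MB null bdev: 512-byte blocks carrying 16 bytes of metadata, protection information type 1
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # subsystem cnode0 with the null bdev as its namespace and a TCP listener on 10.0.0.3:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420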
00:23:39.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:39.559 16:29:05 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:39.559 16:29:05 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:23:39.559 16:29:05 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:39.559 16:29:05 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:39.559 16:29:05 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.559 16:29:05 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.559 16:29:05 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.559 16:29:05 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.559 16:29:05 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.559 16:29:05 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.559 16:29:05 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.818 16:29:05 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:39.818 16:29:05 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.818 16:29:05 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:39.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.818 --rc genhtml_branch_coverage=1 00:23:39.818 --rc genhtml_function_coverage=1 00:23:39.818 --rc genhtml_legend=1 00:23:39.818 --rc geninfo_all_blocks=1 00:23:39.818 --rc geninfo_unexecuted_blocks=1 00:23:39.818 00:23:39.819 ' 00:23:39.819 16:29:05 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.819 --rc genhtml_branch_coverage=1 00:23:39.819 --rc genhtml_function_coverage=1 00:23:39.819 --rc genhtml_legend=1 00:23:39.819 --rc geninfo_all_blocks=1 00:23:39.819 --rc geninfo_unexecuted_blocks=1 00:23:39.819 00:23:39.819 ' 00:23:39.819 16:29:05 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:23:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.819 --rc genhtml_branch_coverage=1 00:23:39.819 --rc genhtml_function_coverage=1 00:23:39.819 --rc genhtml_legend=1 00:23:39.819 --rc geninfo_all_blocks=1 00:23:39.819 --rc geninfo_unexecuted_blocks=1 00:23:39.819 00:23:39.819 ' 00:23:39.819 16:29:05 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.819 --rc genhtml_branch_coverage=1 00:23:39.819 --rc genhtml_function_coverage=1 00:23:39.819 --rc genhtml_legend=1 00:23:39.819 --rc geninfo_all_blocks=1 00:23:39.819 --rc geninfo_unexecuted_blocks=1 00:23:39.819 00:23:39.819 ' 00:23:39.819 16:29:05 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.819 16:29:05 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.819 16:29:05 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.819 16:29:05 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.819 16:29:05 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.819 16:29:05 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.819 16:29:05 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.819 16:29:05 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.819 16:29:05 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:39.819 16:29:05 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.819 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.819 16:29:05 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:39.819 16:29:05 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:39.819 16:29:05 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:39.819 16:29:05 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:39.819 16:29:05 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.819 16:29:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:39.819 16:29:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:39.819 16:29:05 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:39.819 Cannot find device "nvmf_init_br" 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:39.819 Cannot find device "nvmf_init_br2" 00:23:39.819 16:29:05 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:39.820 Cannot find device "nvmf_tgt_br" 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:39.820 Cannot find device "nvmf_tgt_br2" 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:39.820 Cannot find device "nvmf_init_br" 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:39.820 Cannot find device "nvmf_init_br2" 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:39.820 Cannot find device "nvmf_tgt_br" 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:39.820 Cannot find device "nvmf_tgt_br2" 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:39.820 Cannot find device "nvmf_br" 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:23:39.820 Cannot find device "nvmf_init_if" 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:39.820 Cannot find device "nvmf_init_if2" 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:39.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:39.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:39.820 16:29:05 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:40.079 16:29:05 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:40.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:40.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:23:40.079 00:23:40.079 --- 10.0.0.3 ping statistics --- 00:23:40.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.079 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:40.079 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:40.079 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:23:40.079 00:23:40.079 --- 10.0.0.4 ping statistics --- 00:23:40.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.079 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:23:40.079 16:29:05 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:40.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:40.080 00:23:40.080 --- 10.0.0.1 ping statistics --- 00:23:40.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.080 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:40.080 16:29:05 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:40.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:40.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:23:40.080 00:23:40.080 --- 10.0.0.2 ping statistics --- 00:23:40.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.080 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:40.080 16:29:05 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.080 16:29:05 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:23:40.080 16:29:05 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:40.080 16:29:05 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:40.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:40.339 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:40.339 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.598 16:29:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:40.598 16:29:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.598 16:29:06 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.598 16:29:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=97447 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:40.598 16:29:06 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 97447 00:23:40.598 16:29:06 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 97447 ']' 00:23:40.598 16:29:06 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.598 16:29:06 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.598 16:29:06 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.598 16:29:06 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.598 16:29:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.598 [2024-11-26 16:29:06.120132] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:23:40.598 [2024-11-26 16:29:06.120241] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.858 [2024-11-26 16:29:06.273087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.858 [2024-11-26 16:29:06.296428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:40.858 [2024-11-26 16:29:06.296499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.858 [2024-11-26 16:29:06.296513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.858 [2024-11-26 16:29:06.296523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.858 [2024-11-26 16:29:06.296532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.858 [2024-11-26 16:29:06.296943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.858 [2024-11-26 16:29:06.332702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:23:40.858 16:29:06 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.858 16:29:06 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.858 16:29:06 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:40.858 16:29:06 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.858 [2024-11-26 16:29:06.428412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.858 16:29:06 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:40.858 16:29:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:40.858 ************************************ 00:23:40.858 START TEST fio_dif_1_default 00:23:40.858 ************************************ 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:40.858 bdev_null0 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.858 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:40.859 
16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:40.859 [2024-11-26 16:29:06.472577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.859 { 00:23:40.859 "params": { 00:23:40.859 "name": "Nvme$subsystem", 00:23:40.859 "trtype": "$TEST_TRANSPORT", 00:23:40.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.859 "adrfam": "ipv4", 00:23:40.859 "trsvcid": "$NVMF_PORT", 00:23:40.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.859 "hdgst": ${hdgst:-false}, 00:23:40.859 "ddgst": ${ddgst:-false} 00:23:40.859 }, 00:23:40.859 "method": "bdev_nvme_attach_controller" 00:23:40.859 } 00:23:40.859 EOF 00:23:40.859 )") 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:23:40.859 16:29:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:40.859 "params": { 00:23:40.859 "name": "Nvme0", 00:23:40.859 "trtype": "tcp", 00:23:40.859 "traddr": "10.0.0.3", 00:23:40.859 "adrfam": "ipv4", 00:23:40.859 "trsvcid": "4420", 00:23:40.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.859 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:40.859 "hdgst": false, 00:23:40.859 "ddgst": false 00:23:40.859 }, 00:23:40.859 "method": "bdev_nvme_attach_controller" 00:23:40.859 }' 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:41.118 16:29:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:41.118 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:41.118 fio-3.35 00:23:41.118 Starting 1 thread 00:23:53.325 00:23:53.325 filename0: (groupid=0, jobs=1): err= 0: pid=97502: Tue Nov 26 16:29:17 2024 00:23:53.325 read: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(399MiB/10001msec) 00:23:53.325 slat (nsec): min=5810, max=73159, avg=7388.53, stdev=3077.24 00:23:53.325 clat (usec): min=307, max=3417, avg=370.04, stdev=41.56 00:23:53.325 lat (usec): min=313, max=3445, avg=377.43, stdev=42.30 00:23:53.325 clat percentiles (usec): 00:23:53.325 | 1.00th=[ 314], 5.00th=[ 
322], 10.00th=[ 326], 20.00th=[ 338], 00:23:53.325 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:23:53.325 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 416], 95.00th=[ 437], 00:23:53.325 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 562], 99.95th=[ 586], 00:23:53.325 | 99.99th=[ 644] 00:23:53.325 bw ( KiB/s): min=38080, max=41920, per=100.00%, avg=40912.84, stdev=930.17, samples=19 00:23:53.325 iops : min= 9520, max=10480, avg=10228.21, stdev=232.54, samples=19 00:23:53.325 lat (usec) : 500=99.25%, 750=0.74% 00:23:53.325 lat (msec) : 4=0.01% 00:23:53.325 cpu : usr=85.20%, sys=13.08%, ctx=32, majf=0, minf=4 00:23:53.325 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.325 issued rwts: total=102128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.325 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:53.325 00:23:53.325 Run status group 0 (all jobs): 00:23:53.325 READ: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=399MiB (418MB), run=10001-10001msec 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.325 00:23:53.325 real 0m10.860s 00:23:53.325 user 0m9.071s 00:23:53.325 sys 0m1.539s 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.325 ************************************ 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:53.325 END TEST fio_dif_1_default 00:23:53.325 ************************************ 00:23:53.325 16:29:17 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:53.325 16:29:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:53.325 16:29:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.325 16:29:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:53.325 ************************************ 00:23:53.325 START TEST fio_dif_1_multi_subsystems 00:23:53.325 ************************************ 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # 
fio_dif_1_multi_subsystems 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.325 bdev_null0 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.325 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 [2024-11-26 16:29:17.386559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 bdev_null1 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 16:29:17 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.326 { 00:23:53.326 "params": { 00:23:53.326 "name": "Nvme$subsystem", 00:23:53.326 "trtype": "$TEST_TRANSPORT", 00:23:53.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.326 "adrfam": "ipv4", 00:23:53.326 "trsvcid": "$NVMF_PORT", 00:23:53.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.326 "hdgst": ${hdgst:-false}, 00:23:53.326 "ddgst": ${ddgst:-false} 00:23:53.326 }, 00:23:53.326 "method": "bdev_nvme_attach_controller" 00:23:53.326 } 00:23:53.326 EOF 00:23:53.326 )") 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:53.326 { 00:23:53.326 "params": { 00:23:53.326 "name": "Nvme$subsystem", 00:23:53.326 "trtype": "$TEST_TRANSPORT", 00:23:53.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.326 "adrfam": "ipv4", 00:23:53.326 "trsvcid": "$NVMF_PORT", 00:23:53.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.326 "hdgst": ${hdgst:-false}, 00:23:53.326 "ddgst": ${ddgst:-false} 00:23:53.326 }, 00:23:53.326 "method": "bdev_nvme_attach_controller" 00:23:53.326 } 00:23:53.326 EOF 00:23:53.326 )") 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
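The two subsystems this multi-subsystem case reads from were created through the rpc_cmd calls traced earlier; rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, so the equivalent standalone calls would look roughly like the following (shown for bdev_null1/cnode1, with bdev_null0/cnode0 set up identically), assuming the default /var/tmp/spdk.sock RPC socket:

  # 64 MiB null bdev: 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      --serial-number 53313233-1 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420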
00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:23:53.326 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:53.326 "params": { 00:23:53.326 "name": "Nvme0", 00:23:53.326 "trtype": "tcp", 00:23:53.326 "traddr": "10.0.0.3", 00:23:53.326 "adrfam": "ipv4", 00:23:53.326 "trsvcid": "4420", 00:23:53.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:53.327 "hdgst": false, 00:23:53.327 "ddgst": false 00:23:53.327 }, 00:23:53.327 "method": "bdev_nvme_attach_controller" 00:23:53.327 },{ 00:23:53.327 "params": { 00:23:53.327 "name": "Nvme1", 00:23:53.327 "trtype": "tcp", 00:23:53.327 "traddr": "10.0.0.3", 00:23:53.327 "adrfam": "ipv4", 00:23:53.327 "trsvcid": "4420", 00:23:53.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.327 "hdgst": false, 00:23:53.327 "ddgst": false 00:23:53.327 }, 00:23:53.327 "method": "bdev_nvme_attach_controller" 00:23:53.327 }' 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:53.327 16:29:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.327 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:53.327 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:53.327 fio-3.35 00:23:53.327 Starting 2 threads 00:24:03.371 00:24:03.371 filename0: (groupid=0, jobs=1): err= 0: pid=97662: Tue Nov 26 16:29:28 2024 00:24:03.371 read: IOPS=5437, BW=21.2MiB/s (22.3MB/s)(212MiB/10001msec) 00:24:03.371 slat (nsec): min=5907, max=91427, avg=12080.53, stdev=4245.24 00:24:03.371 clat (usec): min=325, max=7662, avg=702.57, stdev=79.81 00:24:03.371 lat (usec): min=331, max=7688, avg=714.65, stdev=80.25 00:24:03.371 clat percentiles (usec): 00:24:03.371 | 1.00th=[ 619], 5.00th=[ 635], 10.00th=[ 644], 20.00th=[ 660], 00:24:03.371 | 30.00th=[ 668], 40.00th=[ 685], 50.00th=[ 693], 60.00th=[ 701], 00:24:03.371 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 766], 95.00th=[ 799], 00:24:03.371 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 988], 99.95th=[ 1029], 00:24:03.371 | 99.99th=[ 1336] 00:24:03.371 bw ( KiB/s): min=21152, max=22048, per=50.05%, avg=21766.74, stdev=217.71, samples=19 00:24:03.371 iops : min= 5288, max= 
5512, avg=5441.68, stdev=54.43, samples=19 00:24:03.371 lat (usec) : 500=0.04%, 750=84.16%, 1000=15.71% 00:24:03.371 lat (msec) : 2=0.08%, 10=0.01% 00:24:03.371 cpu : usr=90.38%, sys=8.22%, ctx=163, majf=0, minf=9 00:24:03.371 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.371 issued rwts: total=54376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.371 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:03.371 filename1: (groupid=0, jobs=1): err= 0: pid=97663: Tue Nov 26 16:29:28 2024 00:24:03.371 read: IOPS=5435, BW=21.2MiB/s (22.3MB/s)(212MiB/10001msec) 00:24:03.371 slat (usec): min=2, max=253, avg=11.96, stdev= 4.39 00:24:03.371 clat (usec): min=522, max=9390, avg=703.70, stdev=94.40 00:24:03.371 lat (usec): min=530, max=9401, avg=715.66, stdev=94.97 00:24:03.371 clat percentiles (usec): 00:24:03.371 | 1.00th=[ 594], 5.00th=[ 619], 10.00th=[ 635], 20.00th=[ 660], 00:24:03.371 | 30.00th=[ 676], 40.00th=[ 685], 50.00th=[ 693], 60.00th=[ 709], 00:24:03.371 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 807], 00:24:03.371 | 99.00th=[ 881], 99.50th=[ 914], 99.90th=[ 1004], 99.95th=[ 1074], 00:24:03.371 | 99.99th=[ 1303] 00:24:03.371 bw ( KiB/s): min=21152, max=22048, per=50.04%, avg=21761.68, stdev=222.59, samples=19 00:24:03.371 iops : min= 5288, max= 5512, avg=5440.42, stdev=55.65, samples=19 00:24:03.371 lat (usec) : 750=82.30%, 1000=17.59% 00:24:03.371 lat (msec) : 2=0.10%, 10=0.01% 00:24:03.371 cpu : usr=89.57%, sys=9.02%, ctx=21, majf=0, minf=9 00:24:03.371 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.371 issued rwts: total=54356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.371 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:03.371 00:24:03.371 Run status group 0 (all jobs): 00:24:03.371 READ: bw=42.5MiB/s (44.5MB/s), 21.2MiB/s-21.2MiB/s (22.3MB/s-22.3MB/s), io=425MiB (445MB), run=10001-10001msec 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:03.371 ************************************ 00:24:03.371 END TEST fio_dif_1_multi_subsystems 00:24:03.371 ************************************ 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.371 00:24:03.371 real 0m10.944s 00:24:03.371 user 0m18.625s 00:24:03.371 sys 0m1.932s 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.371 16:29:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:03.371 16:29:28 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:03.371 16:29:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:03.371 16:29:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.371 16:29:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:03.371 ************************************ 00:24:03.371 START TEST fio_dif_rand_params 00:24:03.371 ************************************ 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:03.371 16:29:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.371 bdev_null0 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.371 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.372 [2024-11-26 16:29:28.387596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.372 { 00:24:03.372 "params": { 00:24:03.372 "name": "Nvme$subsystem", 00:24:03.372 "trtype": 
"$TEST_TRANSPORT", 00:24:03.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.372 "adrfam": "ipv4", 00:24:03.372 "trsvcid": "$NVMF_PORT", 00:24:03.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.372 "hdgst": ${hdgst:-false}, 00:24:03.372 "ddgst": ${ddgst:-false} 00:24:03.372 }, 00:24:03.372 "method": "bdev_nvme_attach_controller" 00:24:03.372 } 00:24:03.372 EOF 00:24:03.372 )") 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:03.372 "params": { 00:24:03.372 "name": "Nvme0", 00:24:03.372 "trtype": "tcp", 00:24:03.372 "traddr": "10.0.0.3", 00:24:03.372 "adrfam": "ipv4", 00:24:03.372 "trsvcid": "4420", 00:24:03.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.372 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:03.372 "hdgst": false, 00:24:03.372 "ddgst": false 00:24:03.372 }, 00:24:03.372 "method": "bdev_nvme_attach_controller" 00:24:03.372 }' 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:03.372 16:29:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.372 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:03.372 ... 
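Throughout these tests fio drives SPDK bdevs in-process via the spdk_bdev ioengine rather than kernel block devices: the bdev_nvme attach JSON is passed on fd 62 and the generated job file on fd 61. A standalone sketch of the same invocation, using ordinary files in place of those /dev/fd descriptors and the plugin path from this run:

  # bdev.json: the gen_nvmf_target_json output printed above
  #            (bdev_nvme_attach_controller to 10.0.0.3:4420)
  # job.fio:   the fio job produced by gen_fio_conf (not echoed in this log)
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio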
00:24:03.372 fio-3.35 00:24:03.372 Starting 3 threads 00:24:08.639 00:24:08.639 filename0: (groupid=0, jobs=1): err= 0: pid=97817: Tue Nov 26 16:29:34 2024 00:24:08.639 read: IOPS=284, BW=35.5MiB/s (37.3MB/s)(178MiB/5003msec) 00:24:08.639 slat (nsec): min=6474, max=33233, avg=9226.03, stdev=3659.75 00:24:08.639 clat (usec): min=4681, max=12106, avg=10531.62, stdev=442.69 00:24:08.639 lat (usec): min=4688, max=12119, avg=10540.85, stdev=442.69 00:24:08.639 clat percentiles (usec): 00:24:08.639 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10290], 20.00th=[10290], 00:24:08.639 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10421], 00:24:08.639 | 70.00th=[10552], 80.00th=[10814], 90.00th=[10945], 95.00th=[11338], 00:24:08.639 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12125], 99.95th=[12125], 00:24:08.639 | 99.99th=[12125] 00:24:08.639 bw ( KiB/s): min=36096, max=36864, per=33.31%, avg=36326.40, stdev=370.98, samples=10 00:24:08.639 iops : min= 282, max= 288, avg=283.80, stdev= 2.90, samples=10 00:24:08.639 lat (msec) : 10=0.21%, 20=99.79% 00:24:08.639 cpu : usr=91.28%, sys=8.14%, ctx=7, majf=0, minf=3 00:24:08.639 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:08.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.640 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.640 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:08.640 filename0: (groupid=0, jobs=1): err= 0: pid=97819: Tue Nov 26 16:29:34 2024 00:24:08.640 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(178MiB/5007msec) 00:24:08.640 slat (nsec): min=6878, max=52507, avg=13341.96, stdev=4021.16 00:24:08.640 clat (usec): min=8322, max=12440, avg=10533.36, stdev=373.21 00:24:08.640 lat (usec): min=8337, max=12452, avg=10546.70, stdev=373.33 00:24:08.640 clat percentiles (usec): 00:24:08.640 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:24:08.640 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10421], 00:24:08.640 | 70.00th=[10552], 80.00th=[10814], 90.00th=[10945], 95.00th=[11207], 00:24:08.640 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12387], 99.95th=[12387], 00:24:08.640 | 99.99th=[12387] 00:24:08.640 bw ( KiB/s): min=35328, max=36864, per=33.31%, avg=36326.40, stdev=518.36, samples=10 00:24:08.640 iops : min= 276, max= 288, avg=283.80, stdev= 4.05, samples=10 00:24:08.640 lat (msec) : 10=0.42%, 20=99.58% 00:24:08.640 cpu : usr=91.05%, sys=8.39%, ctx=8, majf=0, minf=0 00:24:08.640 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:08.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.640 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.640 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:08.640 filename0: (groupid=0, jobs=1): err= 0: pid=97820: Tue Nov 26 16:29:34 2024 00:24:08.640 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(178MiB/5007msec) 00:24:08.640 slat (nsec): min=6828, max=45442, avg=13845.11, stdev=4283.06 00:24:08.640 clat (usec): min=8312, max=12423, avg=10531.38, stdev=372.07 00:24:08.640 lat (usec): min=8325, max=12443, avg=10545.22, stdev=372.21 00:24:08.640 clat percentiles (usec): 00:24:08.640 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:24:08.640 | 30.00th=[10290], 40.00th=[10421], 
50.00th=[10421], 60.00th=[10421], 00:24:08.640 | 70.00th=[10552], 80.00th=[10814], 90.00th=[10945], 95.00th=[11207], 00:24:08.640 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12387], 99.95th=[12387], 00:24:08.640 | 99.99th=[12387] 00:24:08.640 bw ( KiB/s): min=35328, max=36864, per=33.31%, avg=36326.40, stdev=518.36, samples=10 00:24:08.640 iops : min= 276, max= 288, avg=283.80, stdev= 4.05, samples=10 00:24:08.640 lat (msec) : 10=0.42%, 20=99.58% 00:24:08.640 cpu : usr=92.17%, sys=7.25%, ctx=50, majf=0, minf=0 00:24:08.640 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:08.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.640 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.640 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:08.640 00:24:08.640 Run status group 0 (all jobs): 00:24:08.640 READ: bw=107MiB/s (112MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.3MB/s), io=533MiB (559MB), run=5003-5007msec 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:08.640 
16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.640 bdev_null0 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.640 [2024-11-26 16:29:34.277222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.640 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.900 bdev_null1 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.900 16:29:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.900 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.900 bdev_null2 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:08.901 { 00:24:08.901 "params": { 00:24:08.901 "name": "Nvme$subsystem", 00:24:08.901 "trtype": "$TEST_TRANSPORT", 00:24:08.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.901 "adrfam": "ipv4", 00:24:08.901 "trsvcid": "$NVMF_PORT", 00:24:08.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.901 "hdgst": ${hdgst:-false}, 00:24:08.901 "ddgst": ${ddgst:-false} 00:24:08.901 }, 00:24:08.901 "method": "bdev_nvme_attach_controller" 00:24:08.901 } 00:24:08.901 EOF 00:24:08.901 )") 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:08.901 { 00:24:08.901 "params": { 00:24:08.901 "name": "Nvme$subsystem", 00:24:08.901 "trtype": "$TEST_TRANSPORT", 00:24:08.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.901 "adrfam": "ipv4", 00:24:08.901 "trsvcid": "$NVMF_PORT", 00:24:08.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.901 "hdgst": ${hdgst:-false}, 00:24:08.901 "ddgst": ${ddgst:-false} 00:24:08.901 }, 00:24:08.901 "method": "bdev_nvme_attach_controller" 00:24:08.901 } 00:24:08.901 EOF 00:24:08.901 )") 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:08.901 
16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:08.901 { 00:24:08.901 "params": { 00:24:08.901 "name": "Nvme$subsystem", 00:24:08.901 "trtype": "$TEST_TRANSPORT", 00:24:08.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.901 "adrfam": "ipv4", 00:24:08.901 "trsvcid": "$NVMF_PORT", 00:24:08.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.901 "hdgst": ${hdgst:-false}, 00:24:08.901 "ddgst": ${ddgst:-false} 00:24:08.901 }, 00:24:08.901 "method": "bdev_nvme_attach_controller" 00:24:08.901 } 00:24:08.901 EOF 00:24:08.901 )") 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:08.901 "params": { 00:24:08.901 "name": "Nvme0", 00:24:08.901 "trtype": "tcp", 00:24:08.901 "traddr": "10.0.0.3", 00:24:08.901 "adrfam": "ipv4", 00:24:08.901 "trsvcid": "4420", 00:24:08.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:08.901 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:08.901 "hdgst": false, 00:24:08.901 "ddgst": false 00:24:08.901 }, 00:24:08.901 "method": "bdev_nvme_attach_controller" 00:24:08.901 },{ 00:24:08.901 "params": { 00:24:08.901 "name": "Nvme1", 00:24:08.901 "trtype": "tcp", 00:24:08.901 "traddr": "10.0.0.3", 00:24:08.901 "adrfam": "ipv4", 00:24:08.901 "trsvcid": "4420", 00:24:08.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.901 "hdgst": false, 00:24:08.901 "ddgst": false 00:24:08.901 }, 00:24:08.901 "method": "bdev_nvme_attach_controller" 00:24:08.901 },{ 00:24:08.901 "params": { 00:24:08.901 "name": "Nvme2", 00:24:08.901 "trtype": "tcp", 00:24:08.901 "traddr": "10.0.0.3", 00:24:08.901 "adrfam": "ipv4", 00:24:08.901 "trsvcid": "4420", 00:24:08.901 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:08.901 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:08.901 "hdgst": false, 00:24:08.901 "ddgst": false 00:24:08.901 }, 00:24:08.901 "method": "bdev_nvme_attach_controller" 00:24:08.901 }' 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- 
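The trace above is nvmf/common.sh assembling the JSON handed to fio on /dev/fd/62: one heredoc fragment per subsystem is appended to a bash array, the result is checked with jq and then joined with IFS=',' into the Nvme0/Nvme1/Nvme2 controller list printed at 16:29:34. A minimal sketch of that pattern follows; the loop bounds and the exported NVMF_* / TEST_TRANSPORT variables are assumptions here (the real helper takes the subsystem ids as arguments and fills them from the test environment):

config=()
for subsystem in 0 1 2; do
  # one JSON fragment per subsystem; digest flags default to false when unset
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=','
printf '%s\n' "${config[*]}"   # comma-joined controller list, as printed in the trace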
# LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:08.901 16:29:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:09.161 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:09.161 ... 00:24:09.161 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:09.161 ... 00:24:09.161 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:09.161 ... 00:24:09.161 fio-3.35 00:24:09.161 Starting 24 threads 00:24:21.373 00:24:21.373 filename0: (groupid=0, jobs=1): err= 0: pid=97910: Tue Nov 26 16:29:45 2024 00:24:21.373 read: IOPS=222, BW=890KiB/s (911kB/s)(8948KiB/10056msec) 00:24:21.373 slat (usec): min=8, max=8046, avg=27.14, stdev=247.27 00:24:21.373 clat (msec): min=14, max=142, avg=71.72, stdev=21.81 00:24:21.373 lat (msec): min=14, max=142, avg=71.74, stdev=21.81 00:24:21.373 clat percentiles (msec): 00:24:21.373 | 1.00th=[ 18], 5.00th=[ 34], 10.00th=[ 47], 20.00th=[ 52], 00:24:21.373 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 78], 00:24:21.373 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 99], 95.00th=[ 107], 00:24:21.373 | 99.00th=[ 121], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 142], 00:24:21.373 | 99.99th=[ 142] 00:24:21.373 bw ( KiB/s): min= 528, max= 1664, per=4.08%, avg=887.75, stdev=221.23, samples=20 00:24:21.373 iops : min= 132, max= 416, avg=221.90, stdev=55.33, samples=20 00:24:21.373 lat (msec) : 20=1.34%, 50=15.56%, 100=74.25%, 250=8.85% 00:24:21.373 cpu : usr=40.99%, sys=1.98%, ctx=1313, majf=0, minf=9 00:24:21.373 IO depths : 1=0.1%, 2=2.0%, 4=8.0%, 8=74.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:24:21.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 complete : 0=0.0%, 4=89.5%, 8=8.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.373 filename0: (groupid=0, jobs=1): err= 0: pid=97911: Tue Nov 26 16:29:45 2024 00:24:21.373 read: IOPS=222, BW=890KiB/s (911kB/s)(8936KiB/10042msec) 00:24:21.373 slat (usec): min=3, max=8035, avg=33.25, stdev=379.83 00:24:21.373 clat (msec): min=7, max=144, avg=71.60, stdev=24.26 00:24:21.373 lat (msec): min=7, max=144, avg=71.63, stdev=24.27 00:24:21.373 clat percentiles (msec): 00:24:21.373 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 47], 20.00th=[ 50], 00:24:21.373 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:24:21.373 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 103], 95.00th=[ 109], 00:24:21.373 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:21.373 | 99.99th=[ 146] 00:24:21.373 bw ( KiB/s): min= 528, max= 1907, per=4.09%, avg=889.75, stdev=268.73, samples=20 00:24:21.373 iops : min= 132, max= 476, avg=222.40, stdev=67.03, samples=20 00:24:21.373 lat (msec) : 10=1.43%, 20=2.06%, 50=17.23%, 100=67.59%, 250=11.68% 00:24:21.373 cpu : usr=34.28%, sys=2.00%, ctx=996, majf=0, minf=9 00:24:21.373 IO depths : 1=0.1%, 2=1.7%, 4=6.4%, 8=76.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:21.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 complete : 0=0.0%, 4=89.3%, 8=9.3%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 issued rwts: total=2234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:24:21.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.373 filename0: (groupid=0, jobs=1): err= 0: pid=97912: Tue Nov 26 16:29:45 2024 00:24:21.373 read: IOPS=209, BW=838KiB/s (858kB/s)(8416KiB/10041msec) 00:24:21.373 slat (nsec): min=4216, max=44381, avg=12184.34, stdev=4127.99 00:24:21.373 clat (msec): min=13, max=157, avg=76.18, stdev=21.89 00:24:21.373 lat (msec): min=13, max=157, avg=76.19, stdev=21.89 00:24:21.373 clat percentiles (msec): 00:24:21.373 | 1.00th=[ 15], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:24:21.373 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:24:21.373 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 109], 00:24:21.373 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 159], 00:24:21.373 | 99.99th=[ 159] 00:24:21.373 bw ( KiB/s): min= 528, max= 1434, per=3.84%, avg=836.95, stdev=179.59, samples=20 00:24:21.373 iops : min= 132, max= 358, avg=209.20, stdev=44.81, samples=20 00:24:21.373 lat (msec) : 20=2.28%, 50=12.07%, 100=74.86%, 250=10.79% 00:24:21.373 cpu : usr=31.45%, sys=1.51%, ctx=847, majf=0, minf=9 00:24:21.373 IO depths : 1=0.2%, 2=2.4%, 4=9.3%, 8=72.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:21.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 complete : 0=0.0%, 4=90.2%, 8=7.7%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.373 filename0: (groupid=0, jobs=1): err= 0: pid=97913: Tue Nov 26 16:29:45 2024 00:24:21.373 read: IOPS=220, BW=881KiB/s (902kB/s)(8832KiB/10027msec) 00:24:21.373 slat (usec): min=4, max=8031, avg=23.54, stdev=247.80 00:24:21.373 clat (msec): min=13, max=125, avg=72.45, stdev=20.06 00:24:21.373 lat (msec): min=13, max=125, avg=72.47, stdev=20.07 00:24:21.373 clat percentiles (msec): 00:24:21.373 | 1.00th=[ 28], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 52], 00:24:21.373 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 78], 00:24:21.373 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 99], 95.00th=[ 106], 00:24:21.373 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 127], 00:24:21.373 | 99.99th=[ 127] 00:24:21.373 bw ( KiB/s): min= 634, max= 1322, per=4.04%, avg=879.00, stdev=153.76, samples=20 00:24:21.373 iops : min= 158, max= 330, avg=219.70, stdev=38.41, samples=20 00:24:21.373 lat (msec) : 20=0.23%, 50=17.57%, 100=73.46%, 250=8.74% 00:24:21.373 cpu : usr=41.39%, sys=2.15%, ctx=1441, majf=0, minf=9 00:24:21.373 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:21.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.373 filename0: (groupid=0, jobs=1): err= 0: pid=97914: Tue Nov 26 16:29:45 2024 00:24:21.373 read: IOPS=235, BW=942KiB/s (964kB/s)(9464KiB/10052msec) 00:24:21.373 slat (usec): min=3, max=4024, avg=18.52, stdev=142.83 00:24:21.373 clat (msec): min=2, max=168, avg=67.79, stdev=23.93 00:24:21.373 lat (msec): min=2, max=168, avg=67.81, stdev=23.93 00:24:21.373 clat percentiles (msec): 00:24:21.373 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 37], 20.00th=[ 50], 00:24:21.373 | 30.00th=[ 56], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:24:21.373 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 
96], 95.00th=[ 107], 00:24:21.373 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 169], 00:24:21.373 | 99.99th=[ 169] 00:24:21.373 bw ( KiB/s): min= 656, max= 2176, per=4.32%, avg=940.00, stdev=303.89, samples=20 00:24:21.373 iops : min= 164, max= 544, avg=235.00, stdev=75.97, samples=20 00:24:21.373 lat (msec) : 4=1.44%, 10=2.70%, 20=1.86%, 50=15.30%, 100=71.05% 00:24:21.373 lat (msec) : 250=7.65% 00:24:21.373 cpu : usr=41.11%, sys=2.36%, ctx=1205, majf=0, minf=0 00:24:21.373 IO depths : 1=0.3%, 2=1.2%, 4=3.8%, 8=79.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:21.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 complete : 0=0.0%, 4=88.3%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.373 filename0: (groupid=0, jobs=1): err= 0: pid=97915: Tue Nov 26 16:29:45 2024 00:24:21.373 read: IOPS=228, BW=914KiB/s (936kB/s)(9156KiB/10015msec) 00:24:21.373 slat (usec): min=4, max=8027, avg=27.37, stdev=247.22 00:24:21.373 clat (msec): min=22, max=126, avg=69.84, stdev=20.21 00:24:21.373 lat (msec): min=22, max=126, avg=69.87, stdev=20.21 00:24:21.373 clat percentiles (msec): 00:24:21.373 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 51], 00:24:21.373 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 77], 00:24:21.373 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 99], 95.00th=[ 105], 00:24:21.373 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 121], 99.95th=[ 121], 00:24:21.373 | 99.99th=[ 127] 00:24:21.373 bw ( KiB/s): min= 637, max= 1392, per=4.18%, avg=909.05, stdev=176.71, samples=20 00:24:21.373 iops : min= 159, max= 348, avg=227.25, stdev=44.20, samples=20 00:24:21.373 lat (msec) : 50=19.53%, 100=72.48%, 250=7.99% 00:24:21.373 cpu : usr=41.94%, sys=2.64%, ctx=1539, majf=0, minf=9 00:24:21.373 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=76.1%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:21.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 complete : 0=0.0%, 4=88.8%, 8=9.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.373 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.373 filename0: (groupid=0, jobs=1): err= 0: pid=97916: Tue Nov 26 16:29:45 2024 00:24:21.373 read: IOPS=232, BW=931KiB/s (954kB/s)(9336KiB/10025msec) 00:24:21.373 slat (usec): min=5, max=8028, avg=24.24, stdev=287.09 00:24:21.373 clat (msec): min=10, max=131, avg=68.60, stdev=20.46 00:24:21.373 lat (msec): min=10, max=131, avg=68.62, stdev=20.46 00:24:21.373 clat percentiles (msec): 00:24:21.373 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 48], 00:24:21.374 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 73], 00:24:21.374 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 105], 00:24:21.374 | 99.00th=[ 110], 99.50th=[ 120], 99.90th=[ 122], 99.95th=[ 124], 00:24:21.374 | 99.99th=[ 132] 00:24:21.374 bw ( KiB/s): min= 766, max= 1595, per=4.26%, avg=927.25, stdev=175.85, samples=20 00:24:21.374 iops : min= 191, max= 398, avg=231.75, stdev=43.84, samples=20 00:24:21.374 lat (msec) : 20=0.81%, 50=23.35%, 100=69.84%, 250=6.00% 00:24:21.374 cpu : usr=31.69%, sys=1.93%, ctx=914, majf=0, minf=9 00:24:21.374 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:21.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 complete : 0=0.0%, 
4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.374 filename0: (groupid=0, jobs=1): err= 0: pid=97917: Tue Nov 26 16:29:45 2024 00:24:21.374 read: IOPS=231, BW=924KiB/s (946kB/s)(9280KiB/10042msec) 00:24:21.374 slat (usec): min=3, max=7639, avg=26.02, stdev=244.13 00:24:21.374 clat (msec): min=3, max=136, avg=69.04, stdev=22.47 00:24:21.374 lat (msec): min=3, max=136, avg=69.07, stdev=22.47 00:24:21.374 clat percentiles (msec): 00:24:21.374 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 42], 20.00th=[ 50], 00:24:21.374 | 30.00th=[ 57], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:24:21.374 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 106], 00:24:21.374 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 129], 00:24:21.374 | 99.99th=[ 136] 00:24:21.374 bw ( KiB/s): min= 680, max= 2007, per=4.24%, avg=923.15, stdev=270.64, samples=20 00:24:21.374 iops : min= 170, max= 501, avg=230.75, stdev=67.50, samples=20 00:24:21.374 lat (msec) : 4=0.09%, 10=0.60%, 20=2.97%, 50=17.24%, 100=71.77% 00:24:21.374 lat (msec) : 250=7.33% 00:24:21.374 cpu : usr=41.22%, sys=2.26%, ctx=1188, majf=0, minf=0 00:24:21.374 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:21.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.374 filename1: (groupid=0, jobs=1): err= 0: pid=97918: Tue Nov 26 16:29:45 2024 00:24:21.374 read: IOPS=235, BW=943KiB/s (965kB/s)(9452KiB/10026msec) 00:24:21.374 slat (usec): min=3, max=8023, avg=19.53, stdev=180.65 00:24:21.374 clat (msec): min=24, max=127, avg=67.72, stdev=20.49 00:24:21.374 lat (msec): min=24, max=127, avg=67.74, stdev=20.50 00:24:21.374 clat percentiles (msec): 00:24:21.374 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 46], 20.00th=[ 48], 00:24:21.374 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:24:21.374 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:24:21.374 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 129], 99.95th=[ 129], 00:24:21.374 | 99.99th=[ 129] 00:24:21.374 bw ( KiB/s): min= 686, max= 1408, per=4.33%, avg=941.50, stdev=150.41, samples=20 00:24:21.374 iops : min= 171, max= 352, avg=235.35, stdev=37.65, samples=20 00:24:21.374 lat (msec) : 50=27.13%, 100=66.82%, 250=6.05% 00:24:21.374 cpu : usr=34.18%, sys=2.01%, ctx=1101, majf=0, minf=9 00:24:21.374 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:21.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.374 filename1: (groupid=0, jobs=1): err= 0: pid=97919: Tue Nov 26 16:29:45 2024 00:24:21.374 read: IOPS=222, BW=891KiB/s (912kB/s)(8924KiB/10020msec) 00:24:21.374 slat (usec): min=4, max=8024, avg=24.00, stdev=240.80 00:24:21.374 clat (msec): min=22, max=146, avg=71.68, stdev=20.42 00:24:21.374 lat (msec): min=22, max=146, avg=71.70, stdev=20.42 00:24:21.374 clat percentiles (msec): 00:24:21.374 | 1.00th=[ 35], 5.00th=[ 45], 
10.00th=[ 48], 20.00th=[ 52], 00:24:21.374 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 75], 00:24:21.374 | 70.00th=[ 80], 80.00th=[ 90], 90.00th=[ 100], 95.00th=[ 108], 00:24:21.374 | 99.00th=[ 121], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:24:21.374 | 99.99th=[ 146] 00:24:21.374 bw ( KiB/s): min= 528, max= 1168, per=4.07%, avg=885.30, stdev=146.06, samples=20 00:24:21.374 iops : min= 132, max= 292, avg=221.30, stdev=36.51, samples=20 00:24:21.374 lat (msec) : 50=16.41%, 100=74.41%, 250=9.19% 00:24:21.374 cpu : usr=40.41%, sys=2.26%, ctx=1409, majf=0, minf=9 00:24:21.374 IO depths : 1=0.1%, 2=2.0%, 4=7.7%, 8=75.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:21.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.374 filename1: (groupid=0, jobs=1): err= 0: pid=97920: Tue Nov 26 16:29:45 2024 00:24:21.374 read: IOPS=227, BW=909KiB/s (931kB/s)(9108KiB/10016msec) 00:24:21.374 slat (usec): min=3, max=8030, avg=24.68, stdev=290.76 00:24:21.374 clat (msec): min=26, max=168, avg=70.25, stdev=21.32 00:24:21.374 lat (msec): min=26, max=168, avg=70.28, stdev=21.32 00:24:21.374 clat percentiles (msec): 00:24:21.374 | 1.00th=[ 31], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 48], 00:24:21.374 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 73], 00:24:21.374 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 99], 95.00th=[ 108], 00:24:21.374 | 99.00th=[ 120], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 169], 00:24:21.374 | 99.99th=[ 169] 00:24:21.374 bw ( KiB/s): min= 634, max= 1266, per=4.16%, avg=904.20, stdev=169.42, samples=20 00:24:21.374 iops : min= 158, max= 316, avg=226.00, stdev=42.34, samples=20 00:24:21.374 lat (msec) : 50=25.60%, 100=65.35%, 250=9.05% 00:24:21.374 cpu : usr=31.12%, sys=1.75%, ctx=844, majf=0, minf=9 00:24:21.374 IO depths : 1=0.1%, 2=1.5%, 4=6.2%, 8=77.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:21.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.374 filename1: (groupid=0, jobs=1): err= 0: pid=97921: Tue Nov 26 16:29:45 2024 00:24:21.374 read: IOPS=228, BW=914KiB/s (936kB/s)(9156KiB/10019msec) 00:24:21.374 slat (usec): min=4, max=12023, avg=26.28, stdev=346.05 00:24:21.374 clat (msec): min=21, max=147, avg=69.92, stdev=19.74 00:24:21.374 lat (msec): min=21, max=147, avg=69.95, stdev=19.75 00:24:21.374 clat percentiles (msec): 00:24:21.374 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 50], 00:24:21.374 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:24:21.374 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 00:24:21.374 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 148], 00:24:21.374 | 99.99th=[ 148] 00:24:21.374 bw ( KiB/s): min= 672, max= 1394, per=4.17%, avg=908.60, stdev=147.06, samples=20 00:24:21.374 iops : min= 168, max= 348, avg=227.10, stdev=36.67, samples=20 00:24:21.374 lat (msec) : 50=20.84%, 100=72.48%, 250=6.68% 00:24:21.374 cpu : usr=32.54%, sys=1.41%, ctx=1046, majf=0, minf=9 00:24:21.374 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:21.374 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.374 filename1: (groupid=0, jobs=1): err= 0: pid=97922: Tue Nov 26 16:29:45 2024 00:24:21.374 read: IOPS=230, BW=924KiB/s (946kB/s)(9284KiB/10049msec) 00:24:21.374 slat (usec): min=4, max=12020, avg=25.78, stdev=299.16 00:24:21.374 clat (msec): min=3, max=151, avg=69.08, stdev=25.64 00:24:21.374 lat (msec): min=3, max=151, avg=69.10, stdev=25.65 00:24:21.374 clat percentiles (msec): 00:24:21.374 | 1.00th=[ 4], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 50], 00:24:21.374 | 30.00th=[ 57], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:24:21.374 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 108], 00:24:21.374 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 153], 00:24:21.374 | 99.99th=[ 153] 00:24:21.374 bw ( KiB/s): min= 592, max= 2304, per=4.24%, avg=922.00, stdev=348.06, samples=20 00:24:21.374 iops : min= 148, max= 576, avg=230.50, stdev=87.02, samples=20 00:24:21.374 lat (msec) : 4=1.38%, 10=1.98%, 20=1.46%, 50=17.32%, 100=68.42% 00:24:21.374 lat (msec) : 250=9.44% 00:24:21.374 cpu : usr=42.18%, sys=2.29%, ctx=1260, majf=0, minf=0 00:24:21.374 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=76.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:21.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 complete : 0=0.0%, 4=89.0%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.374 issued rwts: total=2321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.374 filename1: (groupid=0, jobs=1): err= 0: pid=97923: Tue Nov 26 16:29:45 2024 00:24:21.374 read: IOPS=230, BW=920KiB/s (942kB/s)(9204KiB/10002msec) 00:24:21.374 slat (usec): min=4, max=8026, avg=21.70, stdev=204.64 00:24:21.374 clat (usec): min=965, max=122971, avg=69439.77, stdev=22096.22 00:24:21.374 lat (usec): min=980, max=122985, avg=69461.47, stdev=22091.39 00:24:21.374 clat percentiles (usec): 00:24:21.374 | 1.00th=[ 1778], 5.00th=[ 36963], 10.00th=[ 47973], 20.00th=[ 48497], 00:24:21.374 | 30.00th=[ 56361], 40.00th=[ 64750], 50.00th=[ 71828], 60.00th=[ 73925], 00:24:21.374 | 70.00th=[ 81265], 80.00th=[ 85459], 90.00th=[ 96994], 95.00th=[106431], 00:24:21.374 | 99.00th=[108528], 99.50th=[120062], 99.90th=[123208], 99.95th=[123208], 00:24:21.374 | 99.99th=[123208] 00:24:21.374 bw ( KiB/s): min= 640, max= 1120, per=4.09%, avg=890.53, stdev=150.47, samples=19 00:24:21.374 iops : min= 160, max= 280, avg=222.63, stdev=37.62, samples=19 00:24:21.374 lat (usec) : 1000=0.04% 00:24:21.374 lat (msec) : 2=1.39%, 4=0.65%, 10=0.30%, 50=21.08%, 100=68.41% 00:24:21.374 lat (msec) : 250=8.13% 00:24:21.374 cpu : usr=33.92%, sys=1.88%, ctx=1040, majf=0, minf=9 00:24:21.374 IO depths : 1=0.1%, 2=1.9%, 4=7.4%, 8=75.9%, 16=14.6%, 32=0.0%, >=64=0.0% 00:24:21.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 issued rwts: total=2301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.375 filename1: (groupid=0, jobs=1): err= 0: pid=97924: Tue Nov 26 16:29:45 2024 00:24:21.375 read: IOPS=228, BW=913KiB/s (935kB/s)(9144KiB/10015msec) 00:24:21.375 slat (usec): min=4, 
max=8023, avg=22.03, stdev=202.26 00:24:21.375 clat (msec): min=25, max=144, avg=69.99, stdev=21.07 00:24:21.375 lat (msec): min=25, max=144, avg=70.01, stdev=21.07 00:24:21.375 clat percentiles (msec): 00:24:21.375 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 50], 00:24:21.375 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 74], 00:24:21.375 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:24:21.375 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:21.375 | 99.99th=[ 144] 00:24:21.375 bw ( KiB/s): min= 528, max= 1152, per=4.17%, avg=907.85, stdev=155.93, samples=20 00:24:21.375 iops : min= 132, max= 288, avg=226.95, stdev=39.01, samples=20 00:24:21.375 lat (msec) : 50=21.87%, 100=69.47%, 250=8.66% 00:24:21.375 cpu : usr=39.62%, sys=2.07%, ctx=1155, majf=0, minf=10 00:24:21.375 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=79.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:21.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.375 filename1: (groupid=0, jobs=1): err= 0: pid=97925: Tue Nov 26 16:29:45 2024 00:24:21.375 read: IOPS=222, BW=889KiB/s (910kB/s)(8928KiB/10046msec) 00:24:21.375 slat (usec): min=3, max=4024, avg=16.04, stdev=86.07 00:24:21.375 clat (msec): min=13, max=143, avg=71.84, stdev=22.10 00:24:21.375 lat (msec): min=13, max=143, avg=71.86, stdev=22.10 00:24:21.375 clat percentiles (msec): 00:24:21.375 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 51], 00:24:21.375 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:24:21.375 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 100], 95.00th=[ 108], 00:24:21.375 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:21.375 | 99.99th=[ 144] 00:24:21.375 bw ( KiB/s): min= 528, max= 1552, per=4.07%, avg=886.00, stdev=201.93, samples=20 00:24:21.375 iops : min= 132, max= 388, avg=221.50, stdev=50.48, samples=20 00:24:21.375 lat (msec) : 20=1.43%, 50=18.37%, 100=70.74%, 250=9.45% 00:24:21.375 cpu : usr=33.73%, sys=1.95%, ctx=913, majf=0, minf=9 00:24:21.375 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:21.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 complete : 0=0.0%, 4=89.0%, 8=9.8%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.375 filename2: (groupid=0, jobs=1): err= 0: pid=97926: Tue Nov 26 16:29:45 2024 00:24:21.375 read: IOPS=227, BW=910KiB/s (932kB/s)(9148KiB/10049msec) 00:24:21.375 slat (usec): min=4, max=9022, avg=20.03, stdev=215.61 00:24:21.375 clat (msec): min=15, max=168, avg=70.12, stdev=23.15 00:24:21.375 lat (msec): min=15, max=168, avg=70.14, stdev=23.15 00:24:21.375 clat percentiles (msec): 00:24:21.375 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 46], 20.00th=[ 51], 00:24:21.375 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 75], 00:24:21.375 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 99], 95.00th=[ 108], 00:24:21.375 | 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:24:21.375 | 99.99th=[ 169] 00:24:21.375 bw ( KiB/s): min= 540, max= 1648, per=4.17%, avg=907.80, stdev=216.61, samples=20 00:24:21.375 iops : min= 135, max= 412, avg=226.95, 
stdev=54.15, samples=20 00:24:21.375 lat (msec) : 20=0.61%, 50=20.16%, 100=70.79%, 250=8.44% 00:24:21.375 cpu : usr=43.23%, sys=2.48%, ctx=1478, majf=0, minf=9 00:24:21.375 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:21.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.375 filename2: (groupid=0, jobs=1): err= 0: pid=97927: Tue Nov 26 16:29:45 2024 00:24:21.375 read: IOPS=230, BW=922KiB/s (944kB/s)(9236KiB/10017msec) 00:24:21.375 slat (usec): min=3, max=8026, avg=35.58, stdev=382.12 00:24:21.375 clat (msec): min=25, max=138, avg=69.24, stdev=21.02 00:24:21.375 lat (msec): min=25, max=138, avg=69.27, stdev=21.01 00:24:21.375 clat percentiles (msec): 00:24:21.375 | 1.00th=[ 28], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 48], 00:24:21.375 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:24:21.375 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 106], 00:24:21.375 | 99.00th=[ 120], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 138], 00:24:21.375 | 99.99th=[ 138] 00:24:21.375 bw ( KiB/s): min= 634, max= 1280, per=4.21%, avg=916.90, stdev=161.06, samples=20 00:24:21.375 iops : min= 158, max= 320, avg=229.20, stdev=40.31, samples=20 00:24:21.375 lat (msec) : 50=25.64%, 100=64.70%, 250=9.66% 00:24:21.375 cpu : usr=34.07%, sys=2.17%, ctx=1002, majf=0, minf=9 00:24:21.375 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.7%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:21.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 issued rwts: total=2309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.375 filename2: (groupid=0, jobs=1): err= 0: pid=97928: Tue Nov 26 16:29:45 2024 00:24:21.375 read: IOPS=228, BW=916KiB/s (938kB/s)(9160KiB/10004msec) 00:24:21.375 slat (usec): min=3, max=8030, avg=20.89, stdev=236.81 00:24:21.375 clat (msec): min=7, max=119, avg=69.76, stdev=20.28 00:24:21.375 lat (msec): min=7, max=119, avg=69.78, stdev=20.28 00:24:21.375 clat percentiles (msec): 00:24:21.375 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:24:21.375 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:24:21.375 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 107], 00:24:21.375 | 99.00th=[ 109], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:24:21.375 | 99.99th=[ 121] 00:24:21.375 bw ( KiB/s): min= 640, max= 1253, per=4.16%, avg=905.53, stdev=159.72, samples=19 00:24:21.375 iops : min= 160, max= 313, avg=226.37, stdev=39.90, samples=19 00:24:21.375 lat (msec) : 10=0.31%, 50=24.50%, 100=67.21%, 250=7.99% 00:24:21.375 cpu : usr=31.30%, sys=1.60%, ctx=843, majf=0, minf=9 00:24:21.375 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=77.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:21.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 complete : 0=0.0%, 4=88.5%, 8=10.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.375 filename2: (groupid=0, jobs=1): err= 0: pid=97929: Tue Nov 26 16:29:45 2024 00:24:21.375 read: IOPS=227, 
BW=909KiB/s (931kB/s)(9104KiB/10013msec) 00:24:21.375 slat (usec): min=4, max=8028, avg=24.05, stdev=238.51 00:24:21.375 clat (msec): min=23, max=130, avg=70.24, stdev=20.89 00:24:21.375 lat (msec): min=23, max=130, avg=70.26, stdev=20.90 00:24:21.375 clat percentiles (msec): 00:24:21.375 | 1.00th=[ 30], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 49], 00:24:21.375 | 30.00th=[ 56], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:24:21.375 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 100], 95.00th=[ 107], 00:24:21.375 | 99.00th=[ 120], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 131], 00:24:21.375 | 99.99th=[ 131] 00:24:21.375 bw ( KiB/s): min= 640, max= 1392, per=4.16%, avg=904.00, stdev=176.21, samples=20 00:24:21.375 iops : min= 160, max= 348, avg=226.00, stdev=44.05, samples=20 00:24:21.375 lat (msec) : 50=23.99%, 100=66.43%, 250=9.58% 00:24:21.375 cpu : usr=42.59%, sys=2.63%, ctx=1355, majf=0, minf=9 00:24:21.375 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:24:21.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 issued rwts: total=2276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.375 filename2: (groupid=0, jobs=1): err= 0: pid=97930: Tue Nov 26 16:29:45 2024 00:24:21.375 read: IOPS=228, BW=912KiB/s (934kB/s)(9148KiB/10028msec) 00:24:21.375 slat (usec): min=5, max=8024, avg=20.86, stdev=236.84 00:24:21.375 clat (msec): min=12, max=167, avg=70.00, stdev=20.28 00:24:21.375 lat (msec): min=12, max=167, avg=70.02, stdev=20.28 00:24:21.375 clat percentiles (msec): 00:24:21.375 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 50], 00:24:21.375 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 74], 00:24:21.375 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 00:24:21.375 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:24:21.375 | 99.99th=[ 167] 00:24:21.375 bw ( KiB/s): min= 674, max= 1472, per=4.18%, avg=910.50, stdev=167.65, samples=20 00:24:21.375 iops : min= 168, max= 368, avg=227.60, stdev=41.95, samples=20 00:24:21.375 lat (msec) : 20=0.31%, 50=20.99%, 100=71.27%, 250=7.43% 00:24:21.375 cpu : usr=31.73%, sys=1.83%, ctx=921, majf=0, minf=9 00:24:21.375 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.7%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:21.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.375 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.375 filename2: (groupid=0, jobs=1): err= 0: pid=97931: Tue Nov 26 16:29:45 2024 00:24:21.375 read: IOPS=224, BW=898KiB/s (920kB/s)(9016KiB/10037msec) 00:24:21.375 slat (usec): min=3, max=4027, avg=18.78, stdev=119.54 00:24:21.375 clat (msec): min=18, max=145, avg=71.07, stdev=21.61 00:24:21.375 lat (msec): min=18, max=145, avg=71.09, stdev=21.61 00:24:21.375 clat percentiles (msec): 00:24:21.375 | 1.00th=[ 23], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 50], 00:24:21.375 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:24:21.375 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 101], 95.00th=[ 108], 00:24:21.375 | 99.00th=[ 120], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 146], 00:24:21.375 | 99.99th=[ 146] 00:24:21.376 bw ( KiB/s): min= 528, max= 1520, per=4.11%, avg=895.20, 
stdev=199.72, samples=20 00:24:21.376 iops : min= 132, max= 380, avg=223.80, stdev=49.93, samples=20 00:24:21.376 lat (msec) : 20=0.62%, 50=20.54%, 100=68.59%, 250=10.25% 00:24:21.376 cpu : usr=43.62%, sys=2.48%, ctx=1342, majf=0, minf=9 00:24:21.376 IO depths : 1=0.1%, 2=2.0%, 4=7.6%, 8=75.5%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:21.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.376 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.376 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.376 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.376 filename2: (groupid=0, jobs=1): err= 0: pid=97932: Tue Nov 26 16:29:45 2024 00:24:21.376 read: IOPS=221, BW=888KiB/s (909kB/s)(8900KiB/10026msec) 00:24:21.376 slat (nsec): min=3780, max=43053, avg=15022.45, stdev=5087.16 00:24:21.376 clat (msec): min=19, max=144, avg=71.99, stdev=22.22 00:24:21.376 lat (msec): min=19, max=144, avg=72.01, stdev=22.22 00:24:21.376 clat percentiles (msec): 00:24:21.376 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 50], 00:24:21.376 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:24:21.376 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 103], 95.00th=[ 108], 00:24:21.376 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:21.376 | 99.99th=[ 144] 00:24:21.376 bw ( KiB/s): min= 528, max= 1282, per=4.06%, avg=883.60, stdev=165.37, samples=20 00:24:21.376 iops : min= 132, max= 320, avg=220.85, stdev=41.32, samples=20 00:24:21.376 lat (msec) : 20=0.09%, 50=21.21%, 100=68.54%, 250=10.16% 00:24:21.376 cpu : usr=34.37%, sys=1.77%, ctx=1018, majf=0, minf=9 00:24:21.376 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:24:21.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.376 complete : 0=0.0%, 4=89.0%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.376 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.376 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.376 filename2: (groupid=0, jobs=1): err= 0: pid=97933: Tue Nov 26 16:29:45 2024 00:24:21.376 read: IOPS=235, BW=942KiB/s (964kB/s)(9444KiB/10030msec) 00:24:21.376 slat (usec): min=4, max=8029, avg=23.07, stdev=247.37 00:24:21.376 clat (msec): min=12, max=131, avg=67.81, stdev=19.90 00:24:21.376 lat (msec): min=12, max=131, avg=67.83, stdev=19.90 00:24:21.376 clat percentiles (msec): 00:24:21.376 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:24:21.376 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:24:21.376 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 104], 00:24:21.376 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:24:21.376 | 99.99th=[ 132] 00:24:21.376 bw ( KiB/s): min= 712, max= 1416, per=4.32%, avg=940.40, stdev=144.27, samples=20 00:24:21.376 iops : min= 178, max= 354, avg=235.10, stdev=36.07, samples=20 00:24:21.376 lat (msec) : 20=0.51%, 50=24.52%, 100=69.55%, 250=5.42% 00:24:21.376 cpu : usr=33.11%, sys=1.96%, ctx=999, majf=0, minf=9 00:24:21.376 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:21.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.376 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.376 issued rwts: total=2361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.376 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:21.376 00:24:21.376 
Run status group 0 (all jobs): 00:24:21.376 READ: bw=21.2MiB/s (22.3MB/s), 838KiB/s-943KiB/s (858kB/s-965kB/s), io=214MiB (224MB), run=10002-10056msec 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 
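At this point the three subsystems used for the 24-thread randread run are torn down before the script reconfigures for the DIF run that follows. Each destroy_subsystem call in the trace issues one nvmf_delete_subsystem and one bdev_null_delete; roughly, and assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to the running target (the explicit loop below is only illustrative):

for sub in 0 1 2; do
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"   # drop the NVMe-oF subsystem first
  rpc_cmd bdev_null_delete "bdev_null$sub"                        # then remove its backing null bdev
done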
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 bdev_null0 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 [2024-11-26 16:29:45.522089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 bdev_null1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.376 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.377 { 00:24:21.377 "params": { 00:24:21.377 "name": "Nvme$subsystem", 00:24:21.377 "trtype": "$TEST_TRANSPORT", 00:24:21.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.377 "adrfam": "ipv4", 00:24:21.377 "trsvcid": "$NVMF_PORT", 00:24:21.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.377 "hdgst": ${hdgst:-false}, 00:24:21.377 "ddgst": ${ddgst:-false} 00:24:21.377 }, 00:24:21.377 "method": "bdev_nvme_attach_controller" 00:24:21.377 } 00:24:21.377 EOF 00:24:21.377 )") 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:21.377 16:29:45 
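The create_subsystems path just traced mirrors that teardown: for each index it creates a null bdev with the arguments shown (size 64, 512-byte blocks, 16-byte metadata, DIF type 1), wraps it in an NVMe-oF subsystem, attaches the namespace, and adds the TCP listener on 10.0.0.3:4420. A condensed sketch of that rpc sequence, with only the explicit loop assumed:

for sub in 0 1; do
  rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
      --serial-number "53313233-$sub" --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
      -t tcp -a 10.0.0.3 -s 4420
done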
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.377 { 00:24:21.377 "params": { 00:24:21.377 "name": "Nvme$subsystem", 00:24:21.377 "trtype": "$TEST_TRANSPORT", 00:24:21.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.377 "adrfam": "ipv4", 00:24:21.377 "trsvcid": "$NVMF_PORT", 00:24:21.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.377 "hdgst": ${hdgst:-false}, 00:24:21.377 "ddgst": ${ddgst:-false} 00:24:21.377 }, 00:24:21.377 "method": "bdev_nvme_attach_controller" 00:24:21.377 } 00:24:21.377 EOF 00:24:21.377 )") 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:21.377 "params": { 00:24:21.377 "name": "Nvme0", 00:24:21.377 "trtype": "tcp", 00:24:21.377 "traddr": "10.0.0.3", 00:24:21.377 "adrfam": "ipv4", 00:24:21.377 "trsvcid": "4420", 00:24:21.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:21.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:21.377 "hdgst": false, 00:24:21.377 "ddgst": false 00:24:21.377 }, 00:24:21.377 "method": "bdev_nvme_attach_controller" 00:24:21.377 },{ 00:24:21.377 "params": { 00:24:21.377 "name": "Nvme1", 00:24:21.377 "trtype": "tcp", 00:24:21.377 "traddr": "10.0.0.3", 00:24:21.377 "adrfam": "ipv4", 00:24:21.377 "trsvcid": "4420", 00:24:21.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.377 "hdgst": false, 00:24:21.377 "ddgst": false 00:24:21.377 }, 00:24:21.377 "method": "bdev_nvme_attach_controller" 00:24:21.377 }' 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:21.377 16:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:21.377 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:21.377 ... 00:24:21.377 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:21.377 ... 
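Before launching fio, the helper in autotest_common.sh checks whether the SPDK fio plugin was linked against a sanitizer runtime (the empty grep libasan / grep libclang_rt.asan results above) and builds LD_PRELOAD from whatever it finds plus the plugin itself; fio then runs with the spdk_bdev ioengine, reading the generated JSON config and the fio job file from the two /dev/fd descriptors. A compressed sketch of that launch, assuming both descriptors are supplied by process substitutions in the caller:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# empty when the plugin is not an ASAN build, as in this run
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61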
00:24:21.377 fio-3.35 00:24:21.377 Starting 4 threads 00:24:26.648 00:24:26.648 filename0: (groupid=0, jobs=1): err= 0: pid=98073: Tue Nov 26 16:29:51 2024 00:24:26.648 read: IOPS=2171, BW=17.0MiB/s (17.8MB/s)(84.8MiB/5001msec) 00:24:26.648 slat (nsec): min=6685, max=64948, avg=14680.70, stdev=5300.22 00:24:26.648 clat (usec): min=792, max=6684, avg=3637.38, stdev=769.47 00:24:26.648 lat (usec): min=800, max=6710, avg=3652.06, stdev=769.41 00:24:26.648 clat percentiles (usec): 00:24:26.648 | 1.00th=[ 1647], 5.00th=[ 1991], 10.00th=[ 2409], 20.00th=[ 2900], 00:24:26.648 | 30.00th=[ 3556], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3916], 00:24:26.648 | 70.00th=[ 4015], 80.00th=[ 4178], 90.00th=[ 4424], 95.00th=[ 4555], 00:24:26.648 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 6456], 00:24:26.648 | 99.99th=[ 6521] 00:24:26.648 bw ( KiB/s): min=15616, max=19200, per=25.09%, avg=17498.67, stdev=1472.04, samples=9 00:24:26.648 iops : min= 1952, max= 2400, avg=2187.33, stdev=184.01, samples=9 00:24:26.648 lat (usec) : 1000=0.30% 00:24:26.648 lat (msec) : 2=4.90%, 4=63.78%, 10=31.02% 00:24:26.648 cpu : usr=91.12%, sys=7.94%, ctx=8, majf=0, minf=10 00:24:26.648 IO depths : 1=0.1%, 2=12.8%, 4=57.3%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:26.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 issued rwts: total=10859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.648 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:26.648 filename0: (groupid=0, jobs=1): err= 0: pid=98074: Tue Nov 26 16:29:51 2024 00:24:26.648 read: IOPS=2108, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5003msec) 00:24:26.648 slat (nsec): min=6772, max=61538, avg=13884.53, stdev=5171.49 00:24:26.648 clat (usec): min=1277, max=6299, avg=3747.41, stdev=712.52 00:24:26.648 lat (usec): min=1286, max=6322, avg=3761.29, stdev=713.04 00:24:26.648 clat percentiles (usec): 00:24:26.648 | 1.00th=[ 1876], 5.00th=[ 2073], 10.00th=[ 2704], 20.00th=[ 3130], 00:24:26.648 | 30.00th=[ 3654], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 3949], 00:24:26.648 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4621], 00:24:26.648 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5604], 99.95th=[ 6063], 00:24:26.648 | 99.99th=[ 6128] 00:24:26.648 bw ( KiB/s): min=15360, max=18912, per=24.32%, avg=16961.78, stdev=1352.27, samples=9 00:24:26.648 iops : min= 1920, max= 2364, avg=2120.22, stdev=169.03, samples=9 00:24:26.648 lat (msec) : 2=4.04%, 4=59.91%, 10=36.05% 00:24:26.648 cpu : usr=91.86%, sys=7.22%, ctx=24, majf=0, minf=9 00:24:26.648 IO depths : 1=0.1%, 2=15.0%, 4=56.1%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:26.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 issued rwts: total=10549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.648 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:26.648 filename1: (groupid=0, jobs=1): err= 0: pid=98075: Tue Nov 26 16:29:51 2024 00:24:26.648 read: IOPS=2331, BW=18.2MiB/s (19.1MB/s)(91.1MiB/5001msec) 00:24:26.648 slat (nsec): min=4729, max=60578, avg=10487.11, stdev=4651.33 00:24:26.648 clat (usec): min=619, max=11527, avg=3399.18, stdev=1134.39 00:24:26.648 lat (usec): min=627, max=11543, avg=3409.66, stdev=1134.64 00:24:26.648 clat percentiles (usec): 00:24:26.648 | 1.00th=[ 1237], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 
2638], 00:24:26.648 | 30.00th=[ 2999], 40.00th=[ 3490], 50.00th=[ 3687], 60.00th=[ 3818], 00:24:26.648 | 70.00th=[ 3982], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4752], 00:24:26.648 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 6456], 99.95th=[11338], 00:24:26.648 | 99.99th=[11338] 00:24:26.648 bw ( KiB/s): min=13568, max=22176, per=26.21%, avg=18282.00, stdev=3479.94, samples=9 00:24:26.648 iops : min= 1696, max= 2772, avg=2285.22, stdev=434.99, samples=9 00:24:26.648 lat (usec) : 750=0.03%, 1000=0.33% 00:24:26.648 lat (msec) : 2=15.73%, 4=54.12%, 10=29.72%, 20=0.07% 00:24:26.648 cpu : usr=90.18%, sys=8.78%, ctx=7, majf=0, minf=9 00:24:26.648 IO depths : 1=0.1%, 2=7.0%, 4=60.3%, 8=32.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:26.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 complete : 0=0.0%, 4=97.3%, 8=2.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 issued rwts: total=11661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.648 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:26.648 filename1: (groupid=0, jobs=1): err= 0: pid=98076: Tue Nov 26 16:29:51 2024 00:24:26.648 read: IOPS=2108, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5002msec) 00:24:26.648 slat (nsec): min=7032, max=63439, avg=15132.24, stdev=4890.87 00:24:26.648 clat (usec): min=1222, max=5713, avg=3742.24, stdev=710.57 00:24:26.648 lat (usec): min=1237, max=5727, avg=3757.37, stdev=710.42 00:24:26.648 clat percentiles (usec): 00:24:26.648 | 1.00th=[ 1860], 5.00th=[ 2040], 10.00th=[ 2704], 20.00th=[ 3130], 00:24:26.648 | 30.00th=[ 3654], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 3949], 00:24:26.648 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4621], 00:24:26.648 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5538], 99.95th=[ 5604], 00:24:26.648 | 99.99th=[ 5735] 00:24:26.648 bw ( KiB/s): min=15360, max=18912, per=24.32%, avg=16961.78, stdev=1350.95, samples=9 00:24:26.648 iops : min= 1920, max= 2364, avg=2120.22, stdev=168.87, samples=9 00:24:26.648 lat (msec) : 2=4.28%, 4=60.30%, 10=35.43% 00:24:26.648 cpu : usr=91.66%, sys=7.40%, ctx=4, majf=0, minf=9 00:24:26.648 IO depths : 1=0.1%, 2=15.0%, 4=56.1%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:26.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.648 issued rwts: total=10549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.648 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:26.648 00:24:26.648 Run status group 0 (all jobs): 00:24:26.648 READ: bw=68.1MiB/s (71.4MB/s), 16.5MiB/s-18.2MiB/s (17.3MB/s-19.1MB/s), io=341MiB (357MB), run=5001-5003msec 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.648 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 ************************************ 00:24:26.649 END TEST fio_dif_rand_params 00:24:26.649 ************************************ 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.649 00:24:26.649 real 0m23.098s 00:24:26.649 user 2m2.890s 00:24:26.649 sys 0m8.383s 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.649 16:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 16:29:51 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:26.649 16:29:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:26.649 16:29:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.649 16:29:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 ************************************ 00:24:26.649 START TEST fio_dif_digest 00:24:26.649 ************************************ 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 bdev_null0 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:26.649 [2024-11-26 16:29:51.554199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:26.649 { 00:24:26.649 "params": { 00:24:26.649 "name": "Nvme$subsystem", 00:24:26.649 "trtype": 
"$TEST_TRANSPORT", 00:24:26.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:26.649 "adrfam": "ipv4", 00:24:26.649 "trsvcid": "$NVMF_PORT", 00:24:26.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:26.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:26.649 "hdgst": ${hdgst:-false}, 00:24:26.649 "ddgst": ${ddgst:-false} 00:24:26.649 }, 00:24:26.649 "method": "bdev_nvme_attach_controller" 00:24:26.649 } 00:24:26.649 EOF 00:24:26.649 )") 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:24:26.649 16:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:26.650 "params": { 00:24:26.650 "name": "Nvme0", 00:24:26.650 "trtype": "tcp", 00:24:26.650 "traddr": "10.0.0.3", 00:24:26.650 "adrfam": "ipv4", 00:24:26.650 "trsvcid": "4420", 00:24:26.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:26.650 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:26.650 "hdgst": true, 00:24:26.650 "ddgst": true 00:24:26.650 }, 00:24:26.650 "method": "bdev_nvme_attach_controller" 00:24:26.650 }' 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:26.650 16:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:26.650 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:26.650 ... 
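The fio run that follows drives that namespace through the spdk_bdev fio plugin: the JSON printed above (bdev_nvme_attach_controller with hdgst/ddgst enabled) is fed in via --spdk_json_conf, and the generated job file arrives on the second descriptor. Reproduced standalone it would look roughly like the sketch below; bdev.json stands in for the generated config, and the Nvme0n1 filename is an assumption based on the Nvme0 controller name:
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
      --name=filename0 --filename=Nvme0n1 \
      --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=10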
00:24:26.650 fio-3.35 00:24:26.650 Starting 3 threads 00:24:36.629 00:24:36.629 filename0: (groupid=0, jobs=1): err= 0: pid=98182: Tue Nov 26 16:30:02 2024 00:24:36.629 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(306MiB/10005msec) 00:24:36.629 slat (nsec): min=6797, max=49548, avg=9446.41, stdev=3635.08 00:24:36.629 clat (usec): min=4403, max=14233, avg=12222.22, stdev=546.29 00:24:36.629 lat (usec): min=4410, max=14245, avg=12231.67, stdev=546.59 00:24:36.629 clat percentiles (usec): 00:24:36.629 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11863], 20.00th=[11863], 00:24:36.629 | 30.00th=[11863], 40.00th=[11994], 50.00th=[11994], 60.00th=[12256], 00:24:36.629 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:24:36.629 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14222], 99.95th=[14222], 00:24:36.629 | 99.99th=[14222] 00:24:36.629 bw ( KiB/s): min=29952, max=32256, per=33.46%, avg=31447.58, stdev=598.94, samples=19 00:24:36.629 iops : min= 234, max= 252, avg=245.68, stdev= 4.68, samples=19 00:24:36.629 lat (msec) : 10=0.12%, 20=99.88% 00:24:36.629 cpu : usr=91.53%, sys=7.88%, ctx=31, majf=0, minf=0 00:24:36.629 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:36.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.629 issued rwts: total=2451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.629 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:36.629 filename0: (groupid=0, jobs=1): err= 0: pid=98183: Tue Nov 26 16:30:02 2024 00:24:36.629 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(306MiB/10001msec) 00:24:36.629 slat (nsec): min=6756, max=52994, avg=9417.25, stdev=3819.22 00:24:36.629 clat (usec): min=11068, max=14434, avg=12232.28, stdev=469.89 00:24:36.629 lat (usec): min=11075, max=14447, avg=12241.70, stdev=470.20 00:24:36.629 clat percentiles (usec): 00:24:36.629 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11863], 20.00th=[11863], 00:24:36.629 | 30.00th=[11863], 40.00th=[11994], 50.00th=[11994], 60.00th=[12256], 00:24:36.629 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:24:36.629 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14353], 99.95th=[14353], 00:24:36.629 | 99.99th=[14484] 00:24:36.629 bw ( KiB/s): min=29952, max=32256, per=33.41%, avg=31403.84, stdev=671.93, samples=19 00:24:36.629 iops : min= 234, max= 252, avg=245.32, stdev= 5.25, samples=19 00:24:36.629 lat (msec) : 20=100.00% 00:24:36.629 cpu : usr=91.64%, sys=7.77%, ctx=16, majf=0, minf=9 00:24:36.629 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:36.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.629 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.629 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:36.629 filename0: (groupid=0, jobs=1): err= 0: pid=98184: Tue Nov 26 16:30:02 2024 00:24:36.629 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(306MiB/10002msec) 00:24:36.629 slat (nsec): min=6765, max=39634, avg=9622.95, stdev=3943.93 00:24:36.629 clat (usec): min=7117, max=17491, avg=12233.62, stdev=534.20 00:24:36.629 lat (usec): min=7125, max=17515, avg=12243.24, stdev=534.72 00:24:36.629 clat percentiles (usec): 00:24:36.629 | 1.00th=[11731], 5.00th=[11731], 10.00th=[11863], 20.00th=[11863], 00:24:36.629 | 30.00th=[11863], 40.00th=[11994], 
50.00th=[11994], 60.00th=[12256], 00:24:36.629 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304], 00:24:36.629 | 99.00th=[13698], 99.50th=[13829], 99.90th=[17433], 99.95th=[17433], 00:24:36.629 | 99.99th=[17433] 00:24:36.629 bw ( KiB/s): min=29952, max=32256, per=33.37%, avg=31363.42, stdev=735.46, samples=19 00:24:36.629 iops : min= 234, max= 252, avg=245.00, stdev= 5.74, samples=19 00:24:36.629 lat (msec) : 10=0.12%, 20=99.88% 00:24:36.629 cpu : usr=91.09%, sys=8.10%, ctx=86, majf=0, minf=0 00:24:36.629 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:36.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.629 issued rwts: total=2448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.629 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:36.629 00:24:36.629 Run status group 0 (all jobs): 00:24:36.629 READ: bw=91.8MiB/s (96.2MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=918MiB (963MB), run=10001-10005msec 00:24:36.888 16:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:36.888 16:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:36.888 16:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:36.888 16:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.889 00:24:36.889 real 0m10.867s 00:24:36.889 user 0m28.013s 00:24:36.889 sys 0m2.577s 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.889 16:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:36.889 ************************************ 00:24:36.889 END TEST fio_dif_digest 00:24:36.889 ************************************ 00:24:36.889 16:30:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:36.889 16:30:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.889 rmmod nvme_tcp 00:24:36.889 rmmod nvme_fabrics 00:24:36.889 rmmod nvme_keyring 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.889 16:30:02 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 97447 ']' 00:24:36.889 16:30:02 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 97447 00:24:36.889 16:30:02 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 97447 ']' 00:24:36.889 16:30:02 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 97447 00:24:37.147 16:30:02 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:24:37.147 16:30:02 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.147 16:30:02 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97447 00:24:37.147 killing process with pid 97447 00:24:37.147 16:30:02 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:37.147 16:30:02 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:37.147 16:30:02 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97447' 00:24:37.147 16:30:02 nvmf_dif -- common/autotest_common.sh@973 -- # kill 97447 00:24:37.147 16:30:02 nvmf_dif -- common/autotest_common.sh@978 -- # wait 97447 00:24:37.148 16:30:02 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:37.148 16:30:02 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:37.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:37.406 Waiting for block devices as requested 00:24:37.664 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:37.664 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:37.664 16:30:03 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.664 16:30:03 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.664 16:30:03 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:37.664 16:30:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:24:37.664 16:30:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.665 16:30:03 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.665 16:30:03 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.665 16:30:03 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:37.665 16:30:03 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:37.665 16:30:03 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:37.665 16:30:03 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:37.665 16:30:03 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:37.665 16:30:03 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:37.665 16:30:03 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:37.923 16:30:03 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:37.923 16:30:03 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:37.923 16:30:03 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:37.923 16:30:03 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:37.923 16:30:03 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:37.923 16:30:03 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:37.923 16:30:03 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:37.923 16:30:03 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:37.923 16:30:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.923 16:30:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:37.923 16:30:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.923 16:30:03 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:37.923 ************************************ 00:24:37.923 END TEST nvmf_dif 00:24:37.923 ************************************ 00:24:37.923 00:24:37.923 real 0m58.449s 00:24:37.923 user 3m45.717s 00:24:37.923 sys 0m19.237s 00:24:37.923 16:30:03 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.923 16:30:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:37.923 16:30:03 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:37.923 16:30:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:37.923 16:30:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.923 16:30:03 -- common/autotest_common.sh@10 -- # set +x 00:24:37.923 ************************************ 00:24:37.923 START TEST nvmf_abort_qd_sizes 00:24:37.923 ************************************ 00:24:37.923 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:38.183 * Looking for test storage... 00:24:38.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.183 --rc genhtml_branch_coverage=1 00:24:38.183 --rc genhtml_function_coverage=1 00:24:38.183 --rc genhtml_legend=1 00:24:38.183 --rc geninfo_all_blocks=1 00:24:38.183 --rc geninfo_unexecuted_blocks=1 00:24:38.183 00:24:38.183 ' 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.183 --rc genhtml_branch_coverage=1 00:24:38.183 --rc genhtml_function_coverage=1 00:24:38.183 --rc genhtml_legend=1 00:24:38.183 --rc geninfo_all_blocks=1 00:24:38.183 --rc geninfo_unexecuted_blocks=1 00:24:38.183 00:24:38.183 ' 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.183 --rc genhtml_branch_coverage=1 00:24:38.183 --rc genhtml_function_coverage=1 00:24:38.183 --rc genhtml_legend=1 00:24:38.183 --rc geninfo_all_blocks=1 00:24:38.183 --rc geninfo_unexecuted_blocks=1 00:24:38.183 00:24:38.183 ' 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.183 --rc genhtml_branch_coverage=1 00:24:38.183 --rc genhtml_function_coverage=1 00:24:38.183 --rc genhtml_legend=1 00:24:38.183 --rc geninfo_all_blocks=1 00:24:38.183 --rc geninfo_unexecuted_blocks=1 00:24:38.183 00:24:38.183 ' 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.183 16:30:03 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.184 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:38.184 Cannot find device "nvmf_init_br" 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:38.184 Cannot find device "nvmf_init_br2" 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:38.184 Cannot find device "nvmf_tgt_br" 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:38.184 Cannot find device "nvmf_tgt_br2" 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:38.184 Cannot find device "nvmf_init_br" 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:38.184 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:38.443 Cannot find device "nvmf_init_br2" 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:38.443 Cannot find device "nvmf_tgt_br" 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:38.443 Cannot find device "nvmf_tgt_br2" 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:38.443 Cannot find device "nvmf_br" 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:38.443 Cannot find device "nvmf_init_if" 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:38.443 Cannot find device "nvmf_init_if2" 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:38.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:38.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:38.443 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:38.444 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:38.444 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:38.444 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:38.444 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:38.444 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:38.444 16:30:03 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:38.444 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
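Stripped of the xtrace noise, nvmf_veth_init is building a small two-by-two topology: the initiator veth ends stay in the root namespace with 10.0.0.1 and 10.0.0.2, the target ends move into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, and the *_br peer ends are then enslaved to the nvmf_br bridge in the entries that follow. A condensed recap of the same commands:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator side, root namespace
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side, moved into the namespace
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2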
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:38.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:38.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:24:38.703 00:24:38.703 --- 10.0.0.3 ping statistics --- 00:24:38.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.703 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:38.703 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:38.703 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:24:38.703 00:24:38.703 --- 10.0.0.4 ping statistics --- 00:24:38.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.703 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:38.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:24:38.703 00:24:38.703 --- 10.0.0.1 ping statistics --- 00:24:38.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.703 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:38.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:38.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:24:38.703 00:24:38.703 --- 10.0.0.2 ping statistics --- 00:24:38.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.703 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:24:38.703 16:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:39.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:39.271 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:39.530 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=98842 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 98842 00:24:39.530 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 98842 ']' 00:24:39.531 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.531 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.531 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.531 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.531 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:39.531 [2024-11-26 16:30:05.100497] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:24:39.531 [2024-11-26 16:30:05.100590] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.790 [2024-11-26 16:30:05.250321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.790 [2024-11-26 16:30:05.277406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.790 [2024-11-26 16:30:05.277465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.790 [2024-11-26 16:30:05.277479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.790 [2024-11-26 16:30:05.277489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.790 [2024-11-26 16:30:05.277497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.790 [2024-11-26 16:30:05.278427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.790 [2024-11-26 16:30:05.278574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.790 [2024-11-26 16:30:05.279294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.790 [2024-11-26 16:30:05.279260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.790 [2024-11-26 16:30:05.315777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:39.790 16:30:05 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:39.790 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:39.791 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
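The nvme_in_userspace enumeration above boils down to one pipeline: list every PCI function, keep those with programming interface 02, and match class/subclass 01/08 (mass storage, NVMe), printing the BDF. Roughly:
  lspci -mm -n -D | grep -i -- -p02 \
    | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # prints 0000:00:10.0 and 0000:00:11.0 on this VM; the first becomes the spdk_target below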
00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.050 16:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:40.050 ************************************ 00:24:40.050 START TEST spdk_target_abort 00:24:40.050 ************************************ 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.050 spdk_targetn1 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.050 [2024-11-26 16:30:05.523653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.050 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.051 [2024-11-26 16:30:05.564533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:40.051 16:30:05 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:40.051 16:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:43.357 Initializing NVMe Controllers 00:24:43.357 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:43.357 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:43.357 Initialization complete. Launching workers. 
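Condensed from the trace above, the rabort helper assembles the transport ID one field at a time (trtype, adrfam, traddr, trsvcid, subnqn) and then runs the SPDK abort example once per queue depth from qds=(4 24 64) against the subsystem it just created:

    # sketch of the rabort loop as traced above
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done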
00:24:43.357 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9660, failed: 0 00:24:43.357 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1088, failed to submit 8572 00:24:43.357 success 900, unsuccessful 188, failed 0 00:24:43.357 16:30:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:43.357 16:30:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:46.643 Initializing NVMe Controllers 00:24:46.643 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:46.643 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:46.643 Initialization complete. Launching workers. 00:24:46.643 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8923, failed: 0 00:24:46.643 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1137, failed to submit 7786 00:24:46.643 success 377, unsuccessful 760, failed 0 00:24:46.643 16:30:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:46.643 16:30:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:49.932 Initializing NVMe Controllers 00:24:49.932 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:49.932 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:49.932 Initialization complete. Launching workers. 
00:24:49.932 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31428, failed: 0 00:24:49.932 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2362, failed to submit 29066 00:24:49.932 success 476, unsuccessful 1886, failed 0 00:24:49.932 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:49.932 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.932 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:49.932 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.932 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:49.932 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.932 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98842 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 98842 ']' 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 98842 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98842 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.192 killing process with pid 98842 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98842' 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 98842 00:24:50.192 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 98842 00:24:50.452 00:24:50.452 real 0m10.389s 00:24:50.452 user 0m39.895s 00:24:50.452 sys 0m1.966s 00:24:50.452 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.452 16:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:50.452 ************************************ 00:24:50.452 END TEST spdk_target_abort 00:24:50.452 ************************************ 00:24:50.452 16:30:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:50.452 16:30:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:50.452 16:30:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.452 16:30:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:50.452 ************************************ 00:24:50.452 START TEST kernel_target_abort 00:24:50.452 
************************************ 00:24:50.452 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:24:50.452 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:50.452 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:24:50.452 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.452 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.452 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.452 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:50.453 16:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:50.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:50.712 Waiting for block devices as requested 00:24:50.712 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:50.971 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:50.971 No valid GPT data, bailing 00:24:50.971 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:50.972 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:51.232 No valid GPT data, bailing 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:51.232 No valid GPT data, bailing 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:51.232 No valid GPT data, bailing 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a --hostid=088cee68-288e-4cf6-92d0-e6cd1eb4210a -a 10.0.0.1 -t tcp -s 4420 00:24:51.232 00:24:51.232 Discovery Log Number of Records 2, Generation counter 2 00:24:51.232 =====Discovery Log Entry 0====== 00:24:51.232 trtype: tcp 00:24:51.232 adrfam: ipv4 00:24:51.232 subtype: current discovery subsystem 00:24:51.232 treq: not specified, sq flow control disable supported 00:24:51.232 portid: 1 00:24:51.232 trsvcid: 4420 00:24:51.232 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:51.232 traddr: 10.0.0.1 00:24:51.232 eflags: none 00:24:51.232 sectype: none 00:24:51.232 =====Discovery Log Entry 1====== 00:24:51.232 trtype: tcp 00:24:51.232 adrfam: ipv4 00:24:51.232 subtype: nvme subsystem 00:24:51.232 treq: not specified, sq flow control disable supported 00:24:51.232 portid: 1 00:24:51.232 trsvcid: 4420 00:24:51.232 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:51.232 traddr: 10.0.0.1 00:24:51.232 eflags: none 00:24:51.232 sectype: none 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:51.232 16:30:16 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:51.232 16:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:54.529 Initializing NVMe Controllers 00:24:54.529 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:54.529 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:54.529 Initialization complete. Launching workers. 00:24:54.529 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31868, failed: 0 00:24:54.529 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31868, failed to submit 0 00:24:54.529 success 0, unsuccessful 31868, failed 0 00:24:54.529 16:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:54.529 16:30:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:57.860 Initializing NVMe Controllers 00:24:57.860 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:57.860 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:57.860 Initialization complete. Launching workers. 
00:24:57.860 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61697, failed: 0 00:24:57.860 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25317, failed to submit 36380 00:24:57.860 success 0, unsuccessful 25317, failed 0 00:24:57.860 16:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:57.861 16:30:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:01.150 Initializing NVMe Controllers 00:25:01.150 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:01.150 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:01.150 Initialization complete. Launching workers. 00:25:01.150 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67454, failed: 0 00:25:01.150 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16846, failed to submit 50608 00:25:01.150 success 0, unsuccessful 16846, failed 0 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:01.150 16:30:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:01.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:02.348 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:02.348 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:02.348 00:25:02.348 real 0m11.921s 00:25:02.348 user 0m5.599s 00:25:02.348 sys 0m3.671s 00:25:02.348 16:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.348 ************************************ 00:25:02.348 END TEST kernel_target_abort 00:25:02.348 ************************************ 00:25:02.348 16:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:02.348 
16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.348 rmmod nvme_tcp 00:25:02.348 rmmod nvme_fabrics 00:25:02.348 rmmod nvme_keyring 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 98842 ']' 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 98842 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 98842 ']' 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 98842 00:25:02.348 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (98842) - No such process 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 98842 is not found' 00:25:02.348 Process with pid 98842 is not found 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:25:02.348 16:30:27 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:02.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:02.916 Waiting for block devices as requested 00:25:02.916 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:02.916 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:03.175 16:30:28 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:25:03.175 00:25:03.175 real 0m25.282s 00:25:03.175 user 0m46.665s 00:25:03.175 sys 0m7.053s 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.175 16:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:03.175 ************************************ 00:25:03.175 END TEST nvmf_abort_qd_sizes 00:25:03.175 ************************************ 00:25:03.435 16:30:28 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:03.435 16:30:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:03.435 16:30:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.435 16:30:28 -- common/autotest_common.sh@10 -- # set +x 00:25:03.435 ************************************ 00:25:03.435 START TEST keyring_file 00:25:03.435 ************************************ 00:25:03.435 16:30:28 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:03.435 * Looking for test storage... 
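The teardown traced just above (nvmftestfini -> nvmf_veth_fini -> remove_spdk_ns) removes the bridge, the veth endpoints and the nvmf_tgt_ns_spdk namespace before the keyring_file suite begins. Condensed to the commands visible in the trace, after the nomaster/down steps:

    # condensed veth/bridge teardown as traced above
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    # assumption: _remove_spdk_ns (whose body is not shown in this trace) then
    # deletes the namespace itself, roughly: ip netns delete nvmf_tgt_ns_spdk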
00:25:03.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:03.435 16:30:28 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.435 16:30:28 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.435 16:30:28 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.435 16:30:29 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.435 16:30:29 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.435 16:30:29 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.435 16:30:29 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.435 16:30:29 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.435 16:30:29 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.435 16:30:29 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@345 -- # : 1 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@353 -- # local d=1 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@355 -- # echo 1 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@353 -- # local d=2 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@355 -- # echo 2 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@368 -- # return 0 00:25:03.436 16:30:29 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.436 16:30:29 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.436 --rc genhtml_branch_coverage=1 00:25:03.436 --rc genhtml_function_coverage=1 00:25:03.436 --rc genhtml_legend=1 00:25:03.436 --rc geninfo_all_blocks=1 00:25:03.436 --rc geninfo_unexecuted_blocks=1 00:25:03.436 00:25:03.436 ' 00:25:03.436 16:30:29 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.436 --rc genhtml_branch_coverage=1 00:25:03.436 --rc genhtml_function_coverage=1 00:25:03.436 --rc genhtml_legend=1 00:25:03.436 --rc geninfo_all_blocks=1 00:25:03.436 --rc 
geninfo_unexecuted_blocks=1 00:25:03.436 00:25:03.436 ' 00:25:03.436 16:30:29 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.436 --rc genhtml_branch_coverage=1 00:25:03.436 --rc genhtml_function_coverage=1 00:25:03.436 --rc genhtml_legend=1 00:25:03.436 --rc geninfo_all_blocks=1 00:25:03.436 --rc geninfo_unexecuted_blocks=1 00:25:03.436 00:25:03.436 ' 00:25:03.436 16:30:29 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.436 --rc genhtml_branch_coverage=1 00:25:03.436 --rc genhtml_function_coverage=1 00:25:03.436 --rc genhtml_legend=1 00:25:03.436 --rc geninfo_all_blocks=1 00:25:03.436 --rc geninfo_unexecuted_blocks=1 00:25:03.436 00:25:03.436 ' 00:25:03.436 16:30:29 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:03.436 16:30:29 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.436 16:30:29 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.436 16:30:29 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.436 16:30:29 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.436 16:30:29 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.436 16:30:29 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:03.436 16:30:29 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@51 -- # : 0 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.436 16:30:29 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:03.436 16:30:29 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:03.436 16:30:29 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:03.436 16:30:29 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:03.436 16:30:29 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:03.436 16:30:29 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:03.436 16:30:29 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:03.436 16:30:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:03.436 16:30:29 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:03.436 16:30:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:03.436 16:30:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:03.436 16:30:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:03.436 16:30:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iKD1hHl0VS 00:25:03.436 16:30:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:03.436 16:30:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iKD1hHl0VS 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iKD1hHl0VS 00:25:03.707 16:30:29 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.iKD1hHl0VS 00:25:03.707 16:30:29 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mz1PQbyvyz 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:03.707 16:30:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mz1PQbyvyz 00:25:03.707 16:30:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mz1PQbyvyz 00:25:03.707 16:30:29 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mz1PQbyvyz 00:25:03.707 16:30:29 keyring_file -- keyring/file.sh@30 -- # tgtpid=99741 00:25:03.707 16:30:29 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:03.707 16:30:29 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99741 00:25:03.707 16:30:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 99741 ']' 00:25:03.707 16:30:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.707 16:30:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
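As traced above, prep_key writes each hex key to a mktemp file in NVMe TLS interchange form and locks down its permissions; the resulting key0path/key1path files are later registered over the bperf socket with keyring_file_add_key. Inside the test harness this amounts to roughly the following (paths come from mktemp, so they differ per run; the NVMeTLSkey-1 wrapping is done by an inline python helper in nvmf/common.sh):

    # sketch of prep_key for key0 as traced above
    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                              # e.g. /tmp/tmp.iKD1hHl0VS in this run
    format_interchange_psk "$key" 0 > "$path"   # wraps the raw hex key as an NVMeTLSkey-1 PSK
    chmod 0600 "$path"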
00:25:03.707 16:30:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.707 16:30:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.707 16:30:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:03.707 [2024-11-26 16:30:29.253885] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:25:03.708 [2024-11-26 16:30:29.253988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99741 ] 00:25:03.968 [2024-11-26 16:30:29.404536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.968 [2024-11-26 16:30:29.436597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.968 [2024-11-26 16:30:29.493563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:04.227 16:30:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:04.227 [2024-11-26 16:30:29.641630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.227 null0 00:25:04.227 [2024-11-26 16:30:29.673592] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:04.227 [2024-11-26 16:30:29.673794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.227 16:30:29 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:04.227 [2024-11-26 16:30:29.701584] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:04.227 request: 00:25:04.227 { 00:25:04.227 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.227 "secure_channel": false, 00:25:04.227 "listen_address": { 00:25:04.227 "trtype": "tcp", 00:25:04.227 "traddr": "127.0.0.1", 00:25:04.227 "trsvcid": "4420" 00:25:04.227 }, 00:25:04.227 "method": "nvmf_subsystem_add_listener", 
00:25:04.227 "req_id": 1 00:25:04.227 } 00:25:04.227 Got JSON-RPC error response 00:25:04.227 response: 00:25:04.227 { 00:25:04.227 "code": -32602, 00:25:04.227 "message": "Invalid parameters" 00:25:04.227 } 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:04.227 16:30:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:04.228 16:30:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.228 16:30:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.228 16:30:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.228 16:30:29 keyring_file -- keyring/file.sh@47 -- # bperfpid=99750 00:25:04.228 16:30:29 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:04.228 16:30:29 keyring_file -- keyring/file.sh@49 -- # waitforlisten 99750 /var/tmp/bperf.sock 00:25:04.228 16:30:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 99750 ']' 00:25:04.228 16:30:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.228 16:30:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.228 16:30:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.228 16:30:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.228 16:30:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:04.228 [2024-11-26 16:30:29.763619] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
00:25:04.228 [2024-11-26 16:30:29.763727] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99750 ] 00:25:04.487 [2024-11-26 16:30:29.915195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.487 [2024-11-26 16:30:29.939203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.487 [2024-11-26 16:30:29.972006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:05.055 16:30:30 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.055 16:30:30 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:05.055 16:30:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iKD1hHl0VS 00:25:05.055 16:30:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iKD1hHl0VS 00:25:05.314 16:30:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mz1PQbyvyz 00:25:05.314 16:30:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mz1PQbyvyz 00:25:05.573 16:30:31 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:25:05.573 16:30:31 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:05.573 16:30:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:05.573 16:30:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.573 16:30:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.832 16:30:31 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.iKD1hHl0VS == \/\t\m\p\/\t\m\p\.\i\K\D\1\h\H\l\0\V\S ]] 00:25:05.832 16:30:31 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:25:05.832 16:30:31 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:25:05.832 16:30:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.832 16:30:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.832 16:30:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:06.091 16:30:31 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.mz1PQbyvyz == \/\t\m\p\/\t\m\p\.\m\z\1\P\Q\b\y\v\y\z ]] 00:25:06.091 16:30:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:25:06.091 16:30:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.091 16:30:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.091 16:30:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.091 16:30:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.091 16:30:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.350 16:30:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:06.350 16:30:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:25:06.350 16:30:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.350 16:30:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:06.350 16:30:31 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.350 16:30:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.350 16:30:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:06.609 16:30:32 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:25:06.609 16:30:32 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.609 16:30:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.868 [2024-11-26 16:30:32.370023] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.868 nvme0n1 00:25:06.868 16:30:32 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:25:06.868 16:30:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.868 16:30:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.868 16:30:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.868 16:30:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.868 16:30:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.127 16:30:32 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:25:07.127 16:30:32 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:25:07.127 16:30:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:07.127 16:30:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.127 16:30:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.127 16:30:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.127 16:30:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:07.695 16:30:33 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:25:07.695 16:30:33 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.695 Running I/O for 1 seconds... 
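The RPC sequence exercised above can be replayed by hand against that bdevperf socket. A minimal sketch, assuming the two temporary key files created earlier in this run (/tmp/tmp.iKD1hHl0VS and /tmp/tmp.mz1PQbyvyz) still exist:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Register both PSK files with the file-based keyring.
  "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iKD1hHl0VS
  "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mz1PQbyvyz
  # Each key starts with a reference count of 1 and reports the path it was loaded from.
  "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | {path, refcnt}'
  # Attaching an NVMe/TCP controller with --psk key0 takes an extra reference on key0 (refcnt becomes 2).
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0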
00:25:08.633 13897.00 IOPS, 54.29 MiB/s 00:25:08.633 Latency(us) 00:25:08.633 [2024-11-26T16:30:34.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.633 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:08.633 nvme0n1 : 1.01 13944.48 54.47 0.00 0.00 9155.33 4408.79 19899.11 00:25:08.633 [2024-11-26T16:30:34.286Z] =================================================================================================================== 00:25:08.633 [2024-11-26T16:30:34.286Z] Total : 13944.48 54.47 0.00 0.00 9155.33 4408.79 19899.11 00:25:08.633 { 00:25:08.633 "results": [ 00:25:08.633 { 00:25:08.633 "job": "nvme0n1", 00:25:08.633 "core_mask": "0x2", 00:25:08.633 "workload": "randrw", 00:25:08.633 "percentage": 50, 00:25:08.633 "status": "finished", 00:25:08.633 "queue_depth": 128, 00:25:08.633 "io_size": 4096, 00:25:08.633 "runtime": 1.005846, 00:25:08.633 "iops": 13944.480566607612, 00:25:08.633 "mibps": 54.47062721331098, 00:25:08.633 "io_failed": 0, 00:25:08.633 "io_timeout": 0, 00:25:08.633 "avg_latency_us": 9155.334939009374, 00:25:08.633 "min_latency_us": 4408.785454545455, 00:25:08.633 "max_latency_us": 19899.112727272728 00:25:08.633 } 00:25:08.633 ], 00:25:08.633 "core_count": 1 00:25:08.633 } 00:25:08.633 16:30:34 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:08.633 16:30:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:08.892 16:30:34 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:25:08.892 16:30:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:08.892 16:30:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:08.892 16:30:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:08.892 16:30:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:08.892 16:30:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.151 16:30:34 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:09.151 16:30:34 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:25:09.151 16:30:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:09.151 16:30:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.151 16:30:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.151 16:30:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:09.151 16:30:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:09.410 16:30:34 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:25:09.410 16:30:34 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:09.410 16:30:34 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:09.410 16:30:34 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:09.410 16:30:34 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:09.410 16:30:34 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.410 16:30:34 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:09.410 16:30:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.410 16:30:34 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:09.410 16:30:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:09.668 [2024-11-26 16:30:35.237244] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:09.668 [2024-11-26 16:30:35.237249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd09f90 (107): Transport endpoint is not connected 00:25:09.668 [2024-11-26 16:30:35.238241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd09f90 (9): Bad file descriptor 00:25:09.668 [2024-11-26 16:30:35.239239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:09.668 [2024-11-26 16:30:35.239272] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:09.668 [2024-11-26 16:30:35.239282] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:09.668 [2024-11-26 16:30:35.239291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
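The attach above is expected to fail: key1 does not match the PSK the connection was established with, so the target drops the socket and the RPC surfaces an error. A minimal hand-run equivalent of the NOT-wrapped check, with the same rpc.py path assumed:

  # Attaching with key1 instead of key0 must fail; treat success as a test error.
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
      echo "unexpected: attach with the wrong PSK succeeded" >&2
      exit 1
  fi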
00:25:09.668 request: 00:25:09.668 { 00:25:09.668 "name": "nvme0", 00:25:09.668 "trtype": "tcp", 00:25:09.668 "traddr": "127.0.0.1", 00:25:09.668 "adrfam": "ipv4", 00:25:09.668 "trsvcid": "4420", 00:25:09.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:09.668 "prchk_reftag": false, 00:25:09.668 "prchk_guard": false, 00:25:09.668 "hdgst": false, 00:25:09.668 "ddgst": false, 00:25:09.668 "psk": "key1", 00:25:09.668 "allow_unrecognized_csi": false, 00:25:09.668 "method": "bdev_nvme_attach_controller", 00:25:09.668 "req_id": 1 00:25:09.668 } 00:25:09.668 Got JSON-RPC error response 00:25:09.668 response: 00:25:09.668 { 00:25:09.668 "code": -5, 00:25:09.668 "message": "Input/output error" 00:25:09.668 } 00:25:09.668 16:30:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:09.668 16:30:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.668 16:30:35 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.668 16:30:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.668 16:30:35 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:25:09.668 16:30:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:09.668 16:30:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:09.668 16:30:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.668 16:30:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.668 16:30:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:09.928 16:30:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:09.928 16:30:35 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:09.928 16:30:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:09.928 16:30:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:09.928 16:30:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.928 16:30:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.928 16:30:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:10.496 16:30:35 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:10.496 16:30:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:10.496 16:30:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:10.496 16:30:36 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:10.496 16:30:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:10.755 16:30:36 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:10.755 16:30:36 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:10.755 16:30:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.013 16:30:36 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:11.013 16:30:36 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.iKD1hHl0VS 00:25:11.013 16:30:36 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.iKD1hHl0VS 00:25:11.013 16:30:36 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:25:11.013 16:30:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.iKD1hHl0VS 00:25:11.013 16:30:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:11.013 16:30:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:11.013 16:30:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:11.013 16:30:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:11.013 16:30:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iKD1hHl0VS 00:25:11.013 16:30:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iKD1hHl0VS 00:25:11.272 [2024-11-26 16:30:36.746712] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iKD1hHl0VS': 0100660 00:25:11.272 [2024-11-26 16:30:36.746744] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:11.272 request: 00:25:11.272 { 00:25:11.272 "name": "key0", 00:25:11.272 "path": "/tmp/tmp.iKD1hHl0VS", 00:25:11.272 "method": "keyring_file_add_key", 00:25:11.272 "req_id": 1 00:25:11.272 } 00:25:11.272 Got JSON-RPC error response 00:25:11.272 response: 00:25:11.272 { 00:25:11.272 "code": -1, 00:25:11.272 "message": "Operation not permitted" 00:25:11.272 } 00:25:11.272 16:30:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:11.272 16:30:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:11.272 16:30:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:11.272 16:30:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:11.272 16:30:36 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.iKD1hHl0VS 00:25:11.272 16:30:36 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iKD1hHl0VS 00:25:11.272 16:30:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iKD1hHl0VS 00:25:11.531 16:30:37 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.iKD1hHl0VS 00:25:11.531 16:30:37 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:11.531 16:30:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:11.531 16:30:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:11.531 16:30:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.531 16:30:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.531 16:30:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:11.789 16:30:37 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:11.789 16:30:37 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:11.790 16:30:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:11.790 16:30:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:11.790 16:30:37 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:11.790 16:30:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:11.790 16:30:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:11.790 16:30:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:11.790 16:30:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:11.790 16:30:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:12.049 [2024-11-26 16:30:37.486950] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.iKD1hHl0VS': No such file or directory 00:25:12.049 [2024-11-26 16:30:37.486986] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:12.049 [2024-11-26 16:30:37.487004] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:12.049 [2024-11-26 16:30:37.487012] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:12.049 [2024-11-26 16:30:37.487019] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:12.049 [2024-11-26 16:30:37.487027] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:12.049 request: 00:25:12.049 { 00:25:12.049 "name": "nvme0", 00:25:12.049 "trtype": "tcp", 00:25:12.049 "traddr": "127.0.0.1", 00:25:12.049 "adrfam": "ipv4", 00:25:12.049 "trsvcid": "4420", 00:25:12.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:12.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:12.049 "prchk_reftag": false, 00:25:12.049 "prchk_guard": false, 00:25:12.049 "hdgst": false, 00:25:12.049 "ddgst": false, 00:25:12.049 "psk": "key0", 00:25:12.049 "allow_unrecognized_csi": false, 00:25:12.049 "method": "bdev_nvme_attach_controller", 00:25:12.049 "req_id": 1 00:25:12.049 } 00:25:12.049 Got JSON-RPC error response 00:25:12.049 response: 00:25:12.049 { 00:25:12.049 "code": -19, 00:25:12.049 "message": "No such device" 00:25:12.049 } 00:25:12.049 16:30:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:12.049 16:30:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:12.049 16:30:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:12.049 16:30:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:12.049 16:30:37 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:12.049 16:30:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:12.309 16:30:37 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:12.309 16:30:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:12.309 16:30:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:12.309 16:30:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:12.309 
16:30:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:12.309 16:30:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:12.309 16:30:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CoGmeTUCua 00:25:12.309 16:30:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:12.309 16:30:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:12.309 16:30:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:12.309 16:30:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:12.309 16:30:37 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:12.309 16:30:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:12.309 16:30:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:12.309 16:30:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CoGmeTUCua 00:25:12.309 16:30:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CoGmeTUCua 00:25:12.309 16:30:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.CoGmeTUCua 00:25:12.309 16:30:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CoGmeTUCua 00:25:12.309 16:30:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CoGmeTUCua 00:25:12.568 16:30:38 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:12.568 16:30:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:12.827 nvme0n1 00:25:12.827 16:30:38 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:12.827 16:30:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:12.827 16:30:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:12.827 16:30:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:12.827 16:30:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:12.827 16:30:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:13.085 16:30:38 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:13.085 16:30:38 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:13.085 16:30:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:13.343 16:30:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:13.343 16:30:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:13.343 16:30:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:13.343 16:30:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:13.343 16:30:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:13.603 16:30:39 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:13.603 16:30:39 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:13.603 16:30:39 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:25:13.603 16:30:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:13.603 16:30:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:13.603 16:30:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:13.603 16:30:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:13.862 16:30:39 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:13.862 16:30:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:13.862 16:30:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:14.120 16:30:39 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:14.120 16:30:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:14.120 16:30:39 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:14.380 16:30:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:14.380 16:30:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CoGmeTUCua 00:25:14.380 16:30:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CoGmeTUCua 00:25:14.638 16:30:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mz1PQbyvyz 00:25:14.638 16:30:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mz1PQbyvyz 00:25:14.897 16:30:40 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:14.897 16:30:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:15.156 nvme0n1 00:25:15.156 16:30:40 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:15.156 16:30:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:15.416 16:30:40 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:15.416 "subsystems": [ 00:25:15.416 { 00:25:15.416 "subsystem": "keyring", 00:25:15.416 "config": [ 00:25:15.416 { 00:25:15.416 "method": "keyring_file_add_key", 00:25:15.416 "params": { 00:25:15.416 "name": "key0", 00:25:15.416 "path": "/tmp/tmp.CoGmeTUCua" 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "keyring_file_add_key", 00:25:15.416 "params": { 00:25:15.416 "name": "key1", 00:25:15.416 "path": "/tmp/tmp.mz1PQbyvyz" 00:25:15.416 } 00:25:15.416 } 00:25:15.416 ] 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "subsystem": "iobuf", 00:25:15.416 "config": [ 00:25:15.416 { 00:25:15.416 "method": "iobuf_set_options", 00:25:15.416 "params": { 00:25:15.416 "small_pool_count": 8192, 00:25:15.416 "large_pool_count": 1024, 00:25:15.416 "small_bufsize": 8192, 00:25:15.416 "large_bufsize": 135168, 00:25:15.416 "enable_numa": false 00:25:15.416 } 00:25:15.416 } 00:25:15.416 ] 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "subsystem": 
"sock", 00:25:15.416 "config": [ 00:25:15.416 { 00:25:15.416 "method": "sock_set_default_impl", 00:25:15.416 "params": { 00:25:15.416 "impl_name": "uring" 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "sock_impl_set_options", 00:25:15.416 "params": { 00:25:15.416 "impl_name": "ssl", 00:25:15.416 "recv_buf_size": 4096, 00:25:15.416 "send_buf_size": 4096, 00:25:15.416 "enable_recv_pipe": true, 00:25:15.416 "enable_quickack": false, 00:25:15.416 "enable_placement_id": 0, 00:25:15.416 "enable_zerocopy_send_server": true, 00:25:15.416 "enable_zerocopy_send_client": false, 00:25:15.416 "zerocopy_threshold": 0, 00:25:15.416 "tls_version": 0, 00:25:15.416 "enable_ktls": false 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "sock_impl_set_options", 00:25:15.416 "params": { 00:25:15.416 "impl_name": "posix", 00:25:15.416 "recv_buf_size": 2097152, 00:25:15.416 "send_buf_size": 2097152, 00:25:15.416 "enable_recv_pipe": true, 00:25:15.416 "enable_quickack": false, 00:25:15.416 "enable_placement_id": 0, 00:25:15.416 "enable_zerocopy_send_server": true, 00:25:15.416 "enable_zerocopy_send_client": false, 00:25:15.416 "zerocopy_threshold": 0, 00:25:15.416 "tls_version": 0, 00:25:15.416 "enable_ktls": false 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "sock_impl_set_options", 00:25:15.416 "params": { 00:25:15.416 "impl_name": "uring", 00:25:15.416 "recv_buf_size": 2097152, 00:25:15.416 "send_buf_size": 2097152, 00:25:15.416 "enable_recv_pipe": true, 00:25:15.416 "enable_quickack": false, 00:25:15.416 "enable_placement_id": 0, 00:25:15.416 "enable_zerocopy_send_server": false, 00:25:15.416 "enable_zerocopy_send_client": false, 00:25:15.416 "zerocopy_threshold": 0, 00:25:15.416 "tls_version": 0, 00:25:15.416 "enable_ktls": false 00:25:15.416 } 00:25:15.416 } 00:25:15.416 ] 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "subsystem": "vmd", 00:25:15.416 "config": [] 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "subsystem": "accel", 00:25:15.416 "config": [ 00:25:15.416 { 00:25:15.416 "method": "accel_set_options", 00:25:15.416 "params": { 00:25:15.416 "small_cache_size": 128, 00:25:15.416 "large_cache_size": 16, 00:25:15.416 "task_count": 2048, 00:25:15.416 "sequence_count": 2048, 00:25:15.416 "buf_count": 2048 00:25:15.416 } 00:25:15.416 } 00:25:15.416 ] 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "subsystem": "bdev", 00:25:15.416 "config": [ 00:25:15.416 { 00:25:15.416 "method": "bdev_set_options", 00:25:15.416 "params": { 00:25:15.416 "bdev_io_pool_size": 65535, 00:25:15.416 "bdev_io_cache_size": 256, 00:25:15.416 "bdev_auto_examine": true, 00:25:15.416 "iobuf_small_cache_size": 128, 00:25:15.416 "iobuf_large_cache_size": 16 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "bdev_raid_set_options", 00:25:15.416 "params": { 00:25:15.416 "process_window_size_kb": 1024, 00:25:15.416 "process_max_bandwidth_mb_sec": 0 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "bdev_iscsi_set_options", 00:25:15.416 "params": { 00:25:15.416 "timeout_sec": 30 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "bdev_nvme_set_options", 00:25:15.416 "params": { 00:25:15.416 "action_on_timeout": "none", 00:25:15.416 "timeout_us": 0, 00:25:15.416 "timeout_admin_us": 0, 00:25:15.416 "keep_alive_timeout_ms": 10000, 00:25:15.416 "arbitration_burst": 0, 00:25:15.416 "low_priority_weight": 0, 00:25:15.416 "medium_priority_weight": 0, 00:25:15.416 "high_priority_weight": 0, 00:25:15.416 "nvme_adminq_poll_period_us": 
10000, 00:25:15.416 "nvme_ioq_poll_period_us": 0, 00:25:15.416 "io_queue_requests": 512, 00:25:15.416 "delay_cmd_submit": true, 00:25:15.416 "transport_retry_count": 4, 00:25:15.416 "bdev_retry_count": 3, 00:25:15.416 "transport_ack_timeout": 0, 00:25:15.416 "ctrlr_loss_timeout_sec": 0, 00:25:15.416 "reconnect_delay_sec": 0, 00:25:15.416 "fast_io_fail_timeout_sec": 0, 00:25:15.416 "disable_auto_failback": false, 00:25:15.416 "generate_uuids": false, 00:25:15.416 "transport_tos": 0, 00:25:15.416 "nvme_error_stat": false, 00:25:15.416 "rdma_srq_size": 0, 00:25:15.416 "io_path_stat": false, 00:25:15.416 "allow_accel_sequence": false, 00:25:15.416 "rdma_max_cq_size": 0, 00:25:15.416 "rdma_cm_event_timeout_ms": 0, 00:25:15.416 "dhchap_digests": [ 00:25:15.416 "sha256", 00:25:15.416 "sha384", 00:25:15.416 "sha512" 00:25:15.416 ], 00:25:15.416 "dhchap_dhgroups": [ 00:25:15.416 "null", 00:25:15.416 "ffdhe2048", 00:25:15.416 "ffdhe3072", 00:25:15.416 "ffdhe4096", 00:25:15.416 "ffdhe6144", 00:25:15.416 "ffdhe8192" 00:25:15.416 ] 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "bdev_nvme_attach_controller", 00:25:15.416 "params": { 00:25:15.416 "name": "nvme0", 00:25:15.416 "trtype": "TCP", 00:25:15.416 "adrfam": "IPv4", 00:25:15.416 "traddr": "127.0.0.1", 00:25:15.416 "trsvcid": "4420", 00:25:15.416 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:15.416 "prchk_reftag": false, 00:25:15.416 "prchk_guard": false, 00:25:15.416 "ctrlr_loss_timeout_sec": 0, 00:25:15.416 "reconnect_delay_sec": 0, 00:25:15.416 "fast_io_fail_timeout_sec": 0, 00:25:15.416 "psk": "key0", 00:25:15.416 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:15.416 "hdgst": false, 00:25:15.416 "ddgst": false, 00:25:15.416 "multipath": "multipath" 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "bdev_nvme_set_hotplug", 00:25:15.416 "params": { 00:25:15.416 "period_us": 100000, 00:25:15.416 "enable": false 00:25:15.416 } 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "method": "bdev_wait_for_examine" 00:25:15.416 } 00:25:15.416 ] 00:25:15.416 }, 00:25:15.416 { 00:25:15.416 "subsystem": "nbd", 00:25:15.416 "config": [] 00:25:15.416 } 00:25:15.416 ] 00:25:15.416 }' 00:25:15.416 16:30:40 keyring_file -- keyring/file.sh@115 -- # killprocess 99750 00:25:15.416 16:30:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 99750 ']' 00:25:15.416 16:30:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 99750 00:25:15.416 16:30:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:15.417 16:30:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:15.417 16:30:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99750 00:25:15.417 killing process with pid 99750 00:25:15.417 Received shutdown signal, test time was about 1.000000 seconds 00:25:15.417 00:25:15.417 Latency(us) 00:25:15.417 [2024-11-26T16:30:41.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.417 [2024-11-26T16:30:41.070Z] =================================================================================================================== 00:25:15.417 [2024-11-26T16:30:41.070Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.417 16:30:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:15.417 16:30:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:15.417 16:30:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99750' 00:25:15.417 
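The JSON emitted by save_config above, including both keyring_file_add_key entries and the bdev_nvme_attach_controller parameters, is what the next bdevperf instance is fed at startup. A minimal sketch of that round trip, assuming the first instance is still answering on /var/tmp/bperf.sock when the snapshot is taken:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Snapshot the live configuration, keys and controller included.
  config=$("$rpc" -s /var/tmp/bperf.sock save_config)
  # After stopping the old process, start a new bdevperf and replay the snapshot
  # through a file descriptor (the <(...) substitution expands to -c /dev/fd/NN, as seen above).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config") &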
16:30:40 keyring_file -- common/autotest_common.sh@973 -- # kill 99750 00:25:15.417 16:30:40 keyring_file -- common/autotest_common.sh@978 -- # wait 99750 00:25:15.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:15.676 16:30:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=99996 00:25:15.676 16:30:41 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:15.676 16:30:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 99996 /var/tmp/bperf.sock 00:25:15.676 16:30:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 99996 ']' 00:25:15.676 16:30:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:15.676 16:30:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:15.676 "subsystems": [ 00:25:15.676 { 00:25:15.676 "subsystem": "keyring", 00:25:15.676 "config": [ 00:25:15.676 { 00:25:15.676 "method": "keyring_file_add_key", 00:25:15.676 "params": { 00:25:15.676 "name": "key0", 00:25:15.676 "path": "/tmp/tmp.CoGmeTUCua" 00:25:15.676 } 00:25:15.676 }, 00:25:15.676 { 00:25:15.676 "method": "keyring_file_add_key", 00:25:15.676 "params": { 00:25:15.676 "name": "key1", 00:25:15.676 "path": "/tmp/tmp.mz1PQbyvyz" 00:25:15.676 } 00:25:15.676 } 00:25:15.676 ] 00:25:15.676 }, 00:25:15.676 { 00:25:15.676 "subsystem": "iobuf", 00:25:15.676 "config": [ 00:25:15.676 { 00:25:15.676 "method": "iobuf_set_options", 00:25:15.676 "params": { 00:25:15.676 "small_pool_count": 8192, 00:25:15.676 "large_pool_count": 1024, 00:25:15.676 "small_bufsize": 8192, 00:25:15.676 "large_bufsize": 135168, 00:25:15.676 "enable_numa": false 00:25:15.676 } 00:25:15.676 } 00:25:15.676 ] 00:25:15.676 }, 00:25:15.676 { 00:25:15.676 "subsystem": "sock", 00:25:15.676 "config": [ 00:25:15.676 { 00:25:15.676 "method": "sock_set_default_impl", 00:25:15.676 "params": { 00:25:15.676 "impl_name": "uring" 00:25:15.676 } 00:25:15.676 }, 00:25:15.676 { 00:25:15.676 "method": "sock_impl_set_options", 00:25:15.676 "params": { 00:25:15.676 "impl_name": "ssl", 00:25:15.676 "recv_buf_size": 4096, 00:25:15.676 "send_buf_size": 4096, 00:25:15.676 "enable_recv_pipe": true, 00:25:15.676 "enable_quickack": false, 00:25:15.676 "enable_placement_id": 0, 00:25:15.676 "enable_zerocopy_send_server": true, 00:25:15.676 "enable_zerocopy_send_client": false, 00:25:15.676 "zerocopy_threshold": 0, 00:25:15.676 "tls_version": 0, 00:25:15.676 "enable_ktls": false 00:25:15.676 } 00:25:15.676 }, 00:25:15.676 { 00:25:15.676 "method": "sock_impl_set_options", 00:25:15.676 "params": { 00:25:15.676 "impl_name": "posix", 00:25:15.676 "recv_buf_size": 2097152, 00:25:15.676 "send_buf_size": 2097152, 00:25:15.676 "enable_recv_pipe": true, 00:25:15.676 "enable_quickack": false, 00:25:15.676 "enable_placement_id": 0, 00:25:15.676 "enable_zerocopy_send_server": true, 00:25:15.676 "enable_zerocopy_send_client": false, 00:25:15.676 "zerocopy_threshold": 0, 00:25:15.676 "tls_version": 0, 00:25:15.676 "enable_ktls": false 00:25:15.676 } 00:25:15.676 }, 00:25:15.676 { 00:25:15.676 "method": "sock_impl_set_options", 00:25:15.676 "params": { 00:25:15.676 "impl_name": "uring", 00:25:15.676 "recv_buf_size": 2097152, 00:25:15.676 "send_buf_size": 2097152, 00:25:15.676 "enable_recv_pipe": true, 00:25:15.676 "enable_quickack": false, 00:25:15.676 "enable_placement_id": 0, 00:25:15.676 "enable_zerocopy_send_server": false, 00:25:15.676 
"enable_zerocopy_send_client": false, 00:25:15.676 "zerocopy_threshold": 0, 00:25:15.676 "tls_version": 0, 00:25:15.676 "enable_ktls": false 00:25:15.676 } 00:25:15.676 } 00:25:15.676 ] 00:25:15.676 }, 00:25:15.676 { 00:25:15.676 "subsystem": "vmd", 00:25:15.676 "config": [] 00:25:15.676 }, 00:25:15.676 { 00:25:15.676 "subsystem": "accel", 00:25:15.676 "config": [ 00:25:15.676 { 00:25:15.676 "method": "accel_set_options", 00:25:15.676 "params": { 00:25:15.676 "small_cache_size": 128, 00:25:15.676 "large_cache_size": 16, 00:25:15.676 "task_count": 2048, 00:25:15.676 "sequence_count": 2048, 00:25:15.676 "buf_count": 2048 00:25:15.676 } 00:25:15.676 } 00:25:15.676 ] 00:25:15.676 }, 00:25:15.676 { 00:25:15.676 "subsystem": "bdev", 00:25:15.676 "config": [ 00:25:15.676 { 00:25:15.677 "method": "bdev_set_options", 00:25:15.677 "params": { 00:25:15.677 "bdev_io_pool_size": 65535, 00:25:15.677 "bdev_io_cache_size": 256, 00:25:15.677 "bdev_auto_examine": true, 00:25:15.677 "iobuf_small_cache_size": 128, 00:25:15.677 "iobuf_large_cache_size": 16 00:25:15.677 } 00:25:15.677 }, 00:25:15.677 { 00:25:15.677 "method": "bdev_raid_set_options", 00:25:15.677 "params": { 00:25:15.677 "process_window_size_kb": 1024, 00:25:15.677 "process_max_bandwidth_mb_sec": 0 00:25:15.677 } 00:25:15.677 }, 00:25:15.677 { 00:25:15.677 "method": "bdev_iscsi_set_options", 00:25:15.677 "params": { 00:25:15.677 "timeout_sec": 30 00:25:15.677 } 00:25:15.677 }, 00:25:15.677 { 00:25:15.677 "method": "bdev_nvme_set_options", 00:25:15.677 "params": { 00:25:15.677 "action_on_timeout": "none", 00:25:15.677 "timeout_us": 0, 00:25:15.677 "timeout_admin_us": 0, 00:25:15.677 "keep_alive_timeout_ms": 10000, 00:25:15.677 "arbitration_burst": 0, 00:25:15.677 "low_priority_weight": 0, 00:25:15.677 "medium_priority_weight": 0, 00:25:15.677 "high_priority_weight": 0, 00:25:15.677 "nvme_adminq_poll_period_us": 10000, 00:25:15.677 "nvme_ioq_poll_period_us": 0, 00:25:15.677 "io_queue_requests": 512, 00:25:15.677 "delay_cmd_submit": true, 00:25:15.677 "transport_retry_count": 4, 00:25:15.677 "bdev_retry_count": 3, 00:25:15.677 "transport_ack_timeout": 0, 00:25:15.677 "ctrlr_loss_timeout_sec": 0, 00:25:15.677 "reconnect_delay_sec": 0, 00:25:15.677 "fast_io_fail_timeout_sec": 0, 00:25:15.677 "disable_auto_failback": false, 00:25:15.677 "generate_uuids": false, 00:25:15.677 "transport_tos": 0, 00:25:15.677 "nvme_error_stat": false, 00:25:15.677 "rdma_srq_size": 0, 00:25:15.677 "io_path_stat": false, 00:25:15.677 "allow_accel_sequence": false, 00:25:15.677 "rdma_max_cq_size": 0, 00:25:15.677 "rdma_cm_event_timeout_ms": 0, 00:25:15.677 "dhchap_digests": [ 00:25:15.677 "sha256", 00:25:15.677 "sha384", 00:25:15.677 "sha512" 00:25:15.677 ], 00:25:15.677 "dhchap_dhgroups": [ 00:25:15.677 "null", 00:25:15.677 "ffdhe2048", 00:25:15.677 "ffdhe3072", 00:25:15.677 "ffdhe4096", 00:25:15.677 "ffdhe6144", 00:25:15.677 "ffdhe8192" 00:25:15.677 ] 00:25:15.677 } 00:25:15.677 }, 00:25:15.677 { 00:25:15.677 "method": "bdev_nvme_attach_controller", 00:25:15.677 "params": { 00:25:15.677 "name": "nvme0", 00:25:15.677 "trtype": "TCP", 00:25:15.677 "adrfam": "IPv4", 00:25:15.677 "traddr": "127.0.0.1", 00:25:15.677 "trsvcid": "4420", 00:25:15.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:15.677 "prchk_reftag": false, 00:25:15.677 "prchk_guard": false, 00:25:15.677 "ctrlr_loss_timeout_sec": 0, 00:25:15.677 "reconnect_delay_sec": 0, 00:25:15.677 "fast_io_fail_timeout_sec": 0, 00:25:15.677 "psk": "key0", 00:25:15.677 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:15.677 
"hdgst": false, 00:25:15.677 "ddgst": false, 00:25:15.677 "multipath": "multipath" 00:25:15.677 } 00:25:15.677 }, 00:25:15.677 { 00:25:15.677 "method": "bdev_nvme_set_hotplug", 00:25:15.677 "params": { 00:25:15.677 "period_us": 100000, 00:25:15.677 "enable": false 00:25:15.677 } 00:25:15.677 }, 00:25:15.677 { 00:25:15.677 "method": "bdev_wait_for_examine" 00:25:15.677 } 00:25:15.677 ] 00:25:15.677 }, 00:25:15.677 { 00:25:15.677 "subsystem": "nbd", 00:25:15.677 "config": [] 00:25:15.677 } 00:25:15.677 ] 00:25:15.677 }' 00:25:15.677 16:30:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.677 16:30:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:15.677 16:30:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.677 16:30:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:15.677 [2024-11-26 16:30:41.120354] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 00:25:15.677 [2024-11-26 16:30:41.120575] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99996 ] 00:25:15.677 [2024-11-26 16:30:41.258805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.677 [2024-11-26 16:30:41.278405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.936 [2024-11-26 16:30:41.386412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:15.936 [2024-11-26 16:30:41.422336] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:16.508 16:30:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.508 16:30:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:16.508 16:30:42 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:16.508 16:30:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:16.508 16:30:42 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:16.767 16:30:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:16.767 16:30:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:16.767 16:30:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:16.767 16:30:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:16.767 16:30:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:16.767 16:30:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:16.767 16:30:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:17.026 16:30:42 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:17.026 16:30:42 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:17.026 16:30:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:17.026 16:30:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:17.026 16:30:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:17.026 16:30:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:17.026 16:30:42 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:17.303 16:30:42 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:25:17.303 16:30:42 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:17.303 16:30:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:17.303 16:30:42 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:17.564 16:30:43 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:17.564 16:30:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:17.564 16:30:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.CoGmeTUCua /tmp/tmp.mz1PQbyvyz 00:25:17.564 16:30:43 keyring_file -- keyring/file.sh@20 -- # killprocess 99996 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 99996 ']' 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 99996 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99996 00:25:17.564 killing process with pid 99996 00:25:17.564 Received shutdown signal, test time was about 1.000000 seconds 00:25:17.564 00:25:17.564 Latency(us) 00:25:17.564 [2024-11-26T16:30:43.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.564 [2024-11-26T16:30:43.217Z] =================================================================================================================== 00:25:17.564 [2024-11-26T16:30:43.217Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99996' 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@973 -- # kill 99996 00:25:17.564 16:30:43 keyring_file -- common/autotest_common.sh@978 -- # wait 99996 00:25:17.823 16:30:43 keyring_file -- keyring/file.sh@21 -- # killprocess 99741 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 99741 ']' 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 99741 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99741 00:25:17.823 killing process with pid 99741 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99741' 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@973 -- # kill 99741 00:25:17.823 16:30:43 keyring_file -- common/autotest_common.sh@978 -- # wait 99741 00:25:18.082 00:25:18.082 real 0m14.673s 00:25:18.082 user 0m37.853s 00:25:18.082 sys 0m2.647s 00:25:18.082 16:30:43 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:18.082 
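The checks run above against the replayed instance amount to a short verification plus the same cleanup; the paths are the temporary key files used throughout this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Both keys and the controller must have survived the config replay.
  [[ $("$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq length) -eq 2 ]]
  [[ $("$rpc" -s /var/tmp/bperf.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # Remove the temporary PSK files once the processes are gone.
  rm -f /tmp/tmp.CoGmeTUCua /tmp/tmp.mz1PQbyvyz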
************************************ 00:25:18.082 END TEST keyring_file 00:25:18.082 ************************************ 00:25:18.082 16:30:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:18.082 16:30:43 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:25:18.082 16:30:43 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:18.082 16:30:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:18.082 16:30:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:18.082 16:30:43 -- common/autotest_common.sh@10 -- # set +x 00:25:18.082 ************************************ 00:25:18.082 START TEST keyring_linux 00:25:18.082 ************************************ 00:25:18.082 16:30:43 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:18.082 Joined session keyring: 1037409781 00:25:18.082 * Looking for test storage... 00:25:18.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:18.082 16:30:43 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:18.082 16:30:43 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:25:18.082 16:30:43 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:18.341 16:30:43 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:18.341 16:30:43 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:18.342 16:30:43 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:18.342 16:30:43 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:18.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.342 --rc genhtml_branch_coverage=1 00:25:18.342 --rc genhtml_function_coverage=1 00:25:18.342 --rc genhtml_legend=1 00:25:18.342 --rc geninfo_all_blocks=1 00:25:18.342 --rc geninfo_unexecuted_blocks=1 00:25:18.342 00:25:18.342 ' 00:25:18.342 16:30:43 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:18.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.342 --rc genhtml_branch_coverage=1 00:25:18.342 --rc genhtml_function_coverage=1 00:25:18.342 --rc genhtml_legend=1 00:25:18.342 --rc geninfo_all_blocks=1 00:25:18.342 --rc geninfo_unexecuted_blocks=1 00:25:18.342 00:25:18.342 ' 00:25:18.342 16:30:43 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:18.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.342 --rc genhtml_branch_coverage=1 00:25:18.342 --rc genhtml_function_coverage=1 00:25:18.342 --rc genhtml_legend=1 00:25:18.342 --rc geninfo_all_blocks=1 00:25:18.342 --rc geninfo_unexecuted_blocks=1 00:25:18.342 00:25:18.342 ' 00:25:18.342 16:30:43 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:18.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.342 --rc genhtml_branch_coverage=1 00:25:18.342 --rc genhtml_function_coverage=1 00:25:18.342 --rc genhtml_legend=1 00:25:18.342 --rc geninfo_all_blocks=1 00:25:18.342 --rc geninfo_unexecuted_blocks=1 00:25:18.342 00:25:18.342 ' 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.342 16:30:43 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=088cee68-288e-4cf6-92d0-e6cd1eb4210a 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.342 16:30:43 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.342 16:30:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.342 16:30:43 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.342 16:30:43 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.342 16:30:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:18.342 16:30:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:18.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:18.342 /tmp/:spdk-test:key0 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:18.342 16:30:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:18.342 16:30:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:18.342 /tmp/:spdk-test:key1 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100118 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:18.342 16:30:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100118 00:25:18.343 16:30:43 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 100118 ']' 00:25:18.343 16:30:43 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.343 16:30:43 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.343 16:30:43 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.343 16:30:43 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.343 16:30:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:18.602 [2024-11-26 16:30:43.989896] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
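The prep_key calls traced just before the target starts convert each raw hex key into the NVMe TLS PSK interchange form, NVMeTLSkey-1:<digest id>:<base64 payload>:, and leave it in a 0600 file under /tmp (the redirect into the file is inferred from the chmod and echo that follow). A rough sketch of that formatting step, mirroring the python heredoc in the trace; the CRC-32 byte order and the exact payload layout are assumptions, not something the log states:

# Sketch: format a configured key into the TLS PSK interchange form, as prep_key does.
key=00112233445566778899aabbccddeeff
digest=0
path=/tmp/:spdk-test:key0
psk=$(python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
key, digest = sys.argv[1], int(sys.argv[2])
data = key.encode()                               # the configured key string, used verbatim
data += struct.pack("<I", zlib.crc32(data))       # append CRC-32 (little-endian is an assumption)
print(f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(data).decode()}:")
PY
)
printf '%s\n' "$psk" > "$path"
chmod 0600 "$path"
echo "$path"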
00:25:18.602 [2024-11-26 16:30:43.989992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100118 ] 00:25:18.602 [2024-11-26 16:30:44.133686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.602 [2024-11-26 16:30:44.152743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.602 [2024-11-26 16:30:44.186623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:18.862 16:30:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:18.862 [2024-11-26 16:30:44.302027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.862 null0 00:25:18.862 [2024-11-26 16:30:44.334012] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:18.862 [2024-11-26 16:30:44.334160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.862 16:30:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:18.862 782355043 00:25:18.862 16:30:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:18.862 817150000 00:25:18.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:18.862 16:30:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100129 00:25:18.862 16:30:44 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:18.862 16:30:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100129 /var/tmp/bperf.sock 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 100129 ']' 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.862 16:30:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:18.862 [2024-11-26 16:30:44.417447] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 22.11.4 initialization... 
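The keyctl add user ... @s calls above are what actually hand the PSKs to the kernel: each formatted key is attached to the session keyring under a well-known name, and keyctl prints the serial number (782355043 and 817150000 in this run) that the test later compares against SPDK's view. A small read-back sketch using the first key from the trace; it only assumes keyutils is installed:

# Sketch: load a TLS PSK onto the session keyring and verify it, as in the trace.
name=':spdk-test:key0'
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user "$name" "$psk" @s)   # prints the new key's serial number
echo "serial=$sn"
keyctl search @s user "$name"             # lookup by name should return the same serial
keyctl print "$sn"                        # dumps the payload for comparison with the PSK above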
00:25:18.862 [2024-11-26 16:30:44.417767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100129 ] 00:25:19.121 [2024-11-26 16:30:44.561600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.121 [2024-11-26 16:30:44.579887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.689 16:30:45 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.689 16:30:45 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:19.689 16:30:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:19.689 16:30:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:19.948 16:30:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:19.948 16:30:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:20.207 [2024-11-26 16:30:45.809827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:20.207 16:30:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:20.207 16:30:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:20.467 [2024-11-26 16:30:46.092304] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:20.726 nvme0n1 00:25:20.726 16:30:46 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:20.726 16:30:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:20.726 16:30:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:20.726 16:30:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:20.726 16:30:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:20.726 16:30:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:20.985 16:30:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:20.985 16:30:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:20.985 16:30:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:20.985 16:30:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:20.985 16:30:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:20.985 16:30:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:20.985 16:30:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:21.244 16:30:46 keyring_linux -- keyring/linux.sh@25 -- # sn=782355043 00:25:21.244 16:30:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:21.244 16:30:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
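Everything after bdevperf comes up is driven over its RPC socket: the trace enables the Linux keyring plugin, completes framework init, attaches an NVMe/TCP controller whose PSK is resolved from the kernel key by name, and then cross-checks the key SPDK reports against keyctl. The same sequence as standalone rpc.py calls, with the socket path, NQNs, and key name copied from the trace:

# Sketch: replay the RPC sequence from the trace against a bdevperf instance
# that was started with "-r /var/tmp/bperf.sock -z --wait-for-rpc".
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

"$rpc" -s "$sock" keyring_linux_set_options --enable   # resolve keys from the kernel session keyring
"$rpc" -s "$sock" framework_start_init                 # leave the --wait-for-rpc holding state
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Cross-check: the serial SPDK reports for the key should match the kernel's.
"$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'
keyctl search @s user :spdk-test:key0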
00:25:21.244 16:30:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 782355043 == \7\8\2\3\5\5\0\4\3 ]] 00:25:21.244 16:30:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 782355043 00:25:21.244 16:30:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:21.244 16:30:46 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.244 Running I/O for 1 seconds... 00:25:22.181 13782.00 IOPS, 53.84 MiB/s 00:25:22.181 Latency(us) 00:25:22.181 [2024-11-26T16:30:47.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.181 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:22.181 nvme0n1 : 1.01 13784.81 53.85 0.00 0.00 9240.49 4557.73 12571.00 00:25:22.181 [2024-11-26T16:30:47.834Z] =================================================================================================================== 00:25:22.181 [2024-11-26T16:30:47.834Z] Total : 13784.81 53.85 0.00 0.00 9240.49 4557.73 12571.00 00:25:22.181 { 00:25:22.181 "results": [ 00:25:22.181 { 00:25:22.181 "job": "nvme0n1", 00:25:22.181 "core_mask": "0x2", 00:25:22.181 "workload": "randread", 00:25:22.181 "status": "finished", 00:25:22.181 "queue_depth": 128, 00:25:22.181 "io_size": 4096, 00:25:22.181 "runtime": 1.009082, 00:25:22.181 "iops": 13784.806388380726, 00:25:22.181 "mibps": 53.84689995461221, 00:25:22.181 "io_failed": 0, 00:25:22.181 "io_timeout": 0, 00:25:22.181 "avg_latency_us": 9240.486054506242, 00:25:22.181 "min_latency_us": 4557.730909090909, 00:25:22.181 "max_latency_us": 12570.996363636363 00:25:22.181 } 00:25:22.181 ], 00:25:22.181 "core_count": 1 00:25:22.181 } 00:25:22.441 16:30:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:22.441 16:30:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:22.700 16:30:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:22.700 16:30:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:22.700 16:30:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:22.700 16:30:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:22.700 16:30:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:22.700 16:30:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:22.959 16:30:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:22.959 16:30:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:22.959 16:30:48 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:22.959 16:30:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:22.959 16:30:48 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:25:22.959 16:30:48 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:22.959 
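The one-second randread run whose results appear above is triggered out of band: bdevperf was launched with -z --wait-for-rpc so it sits idle until configured, and the companion bdevperf.py script tells it to start I/O once the controller is attached. A sketch of that launch-and-trigger pattern, with the flags and paths taken from the trace:

# Sketch: run bdevperf as an RPC-driven service and kick off the workload, as in the trace.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bperf.sock

"$spdk/build/examples/bdevperf" -q 128 -o 4k -w randread -t 1 -m 2 -r "$sock" -z --wait-for-rpc &
bperfpid=$!
# ... enable the keyring plugin, finish init and attach the controller over $sock
#     (see the RPC sketch above) ...
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests   # prints the per-job results
kill "$bperfpid"    # the harness uses its killprocess helper for this step instead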
16:30:48 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:22.959 16:30:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:22.959 16:30:48 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:22.959 16:30:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:22.959 16:30:48 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:22.959 16:30:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:23.220 [2024-11-26 16:30:48.667049] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:23.220 [2024-11-26 16:30:48.667551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc2d60 (107): Transport endpoint is not connected 00:25:23.220 [2024-11-26 16:30:48.668543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc2d60 (9): Bad file descriptor 00:25:23.220 [2024-11-26 16:30:48.669540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:23.220 [2024-11-26 16:30:48.669574] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:23.220 [2024-11-26 16:30:48.669584] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:23.220 [2024-11-26 16:30:48.669594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
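The connection errors above are the expected outcome of a negative check: after detaching nvme0, the test tries to attach again with :spdk-test:key1 and wraps the call in the harness's NOT helper, which succeeds only if the RPC fails. A stripped-down version of that assertion using a plain if/else instead of NOT (the RPC arguments are the ones from the trace):

# Sketch: assert that attaching with the second key is rejected, mirroring
# "NOT bperf_cmd bdev_nvme_attach_controller ... --psk :spdk-test:key1" in the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

if "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
  echo 'unexpected: attach with :spdk-test:key1 succeeded' >&2
  exit 1
fi
echo 'attach correctly rejected'

Cleanup then mirrors setup: the trap'd cleanup function looks each serial up again with keyctl search @s user <name> and removes it with keyctl unlink <sn>, which is where the two "1 links removed" messages further down come from.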
00:25:23.220 request: 00:25:23.220 { 00:25:23.220 "name": "nvme0", 00:25:23.220 "trtype": "tcp", 00:25:23.220 "traddr": "127.0.0.1", 00:25:23.220 "adrfam": "ipv4", 00:25:23.221 "trsvcid": "4420", 00:25:23.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:23.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:23.221 "prchk_reftag": false, 00:25:23.221 "prchk_guard": false, 00:25:23.221 "hdgst": false, 00:25:23.221 "ddgst": false, 00:25:23.221 "psk": ":spdk-test:key1", 00:25:23.221 "allow_unrecognized_csi": false, 00:25:23.221 "method": "bdev_nvme_attach_controller", 00:25:23.221 "req_id": 1 00:25:23.221 } 00:25:23.221 Got JSON-RPC error response 00:25:23.221 response: 00:25:23.221 { 00:25:23.221 "code": -5, 00:25:23.221 "message": "Input/output error" 00:25:23.221 } 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@33 -- # sn=782355043 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 782355043 00:25:23.221 1 links removed 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@33 -- # sn=817150000 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 817150000 00:25:23.221 1 links removed 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100129 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 100129 ']' 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 100129 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100129 00:25:23.221 killing process with pid 100129 00:25:23.221 Received shutdown signal, test time was about 1.000000 seconds 00:25:23.221 00:25:23.221 Latency(us) 00:25:23.221 [2024-11-26T16:30:48.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.221 [2024-11-26T16:30:48.874Z] =================================================================================================================== 00:25:23.221 [2024-11-26T16:30:48.874Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.221 16:30:48 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100129' 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@973 -- # kill 100129 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@978 -- # wait 100129 00:25:23.221 16:30:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100118 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 100118 ']' 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 100118 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.221 16:30:48 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100118 00:25:23.525 killing process with pid 100118 00:25:23.525 16:30:48 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:23.525 16:30:48 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:23.525 16:30:48 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100118' 00:25:23.525 16:30:48 keyring_linux -- common/autotest_common.sh@973 -- # kill 100118 00:25:23.525 16:30:48 keyring_linux -- common/autotest_common.sh@978 -- # wait 100118 00:25:23.525 00:25:23.525 real 0m5.485s 00:25:23.525 user 0m11.419s 00:25:23.525 sys 0m1.307s 00:25:23.525 16:30:49 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.525 16:30:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:23.525 ************************************ 00:25:23.525 END TEST keyring_linux 00:25:23.525 ************************************ 00:25:23.525 16:30:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:25:23.525 16:30:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:23.525 16:30:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:23.525 16:30:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:25:23.525 16:30:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:25:23.525 16:30:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:25:23.525 16:30:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:25:23.525 16:30:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:23.525 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:25:23.525 16:30:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:25:23.525 16:30:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:25:23.525 16:30:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:25:23.525 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:25:25.468 INFO: APP EXITING 00:25:25.468 INFO: 
killing all VMs 00:25:25.468 INFO: killing vhost app 00:25:25.468 INFO: EXIT DONE 00:25:26.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:26.298 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:26.298 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:26.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:26.867 Cleaning 00:25:26.867 Removing: /var/run/dpdk/spdk0/config 00:25:26.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:26.867 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:27.126 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:27.126 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:27.126 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:27.126 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:27.126 Removing: /var/run/dpdk/spdk1/config 00:25:27.126 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:27.126 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:27.126 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:27.126 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:27.126 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:27.126 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:27.126 Removing: /var/run/dpdk/spdk2/config 00:25:27.126 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:27.126 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:27.126 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:27.126 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:27.126 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:27.126 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:27.126 Removing: /var/run/dpdk/spdk3/config 00:25:27.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:27.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:27.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:27.126 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:27.126 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:27.126 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:27.126 Removing: /var/run/dpdk/spdk4/config 00:25:27.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:27.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:27.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:27.127 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:27.127 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:27.127 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:27.127 Removing: /dev/shm/nvmf_trace.0 00:25:27.127 Removing: /dev/shm/spdk_tgt_trace.pid68969 00:25:27.127 Removing: /var/run/dpdk/spdk0 00:25:27.127 Removing: /var/run/dpdk/spdk1 00:25:27.127 Removing: /var/run/dpdk/spdk2 00:25:27.127 Removing: /var/run/dpdk/spdk3 00:25:27.127 Removing: /var/run/dpdk/spdk4 00:25:27.127 Removing: /var/run/dpdk/spdk_pid100118 00:25:27.127 Removing: /var/run/dpdk/spdk_pid100129 00:25:27.127 Removing: /var/run/dpdk/spdk_pid68822 00:25:27.127 Removing: /var/run/dpdk/spdk_pid68969 00:25:27.127 Removing: /var/run/dpdk/spdk_pid69162 00:25:27.127 Removing: /var/run/dpdk/spdk_pid69249 00:25:27.127 Removing: /var/run/dpdk/spdk_pid69263 00:25:27.127 Removing: /var/run/dpdk/spdk_pid69373 00:25:27.127 Removing: /var/run/dpdk/spdk_pid69383 00:25:27.127 Removing: /var/run/dpdk/spdk_pid69517 00:25:27.127 Removing: 
/var/run/dpdk/spdk_pid69713 00:25:27.127 Removing: /var/run/dpdk/spdk_pid69867 00:25:27.127 Removing: /var/run/dpdk/spdk_pid69945 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70016 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70109 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70181 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70220 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70250 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70319 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70406 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70841 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70880 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70918 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70932 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70982 00:25:27.127 Removing: /var/run/dpdk/spdk_pid70990 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71052 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71060 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71100 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71118 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71158 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71178 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71309 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71339 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71421 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71748 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71760 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71796 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71804 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71820 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71839 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71852 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71868 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71887 00:25:27.127 Removing: /var/run/dpdk/spdk_pid71900 00:25:27.387 Removing: /var/run/dpdk/spdk_pid71916 00:25:27.387 Removing: /var/run/dpdk/spdk_pid71935 00:25:27.387 Removing: /var/run/dpdk/spdk_pid71943 00:25:27.387 Removing: /var/run/dpdk/spdk_pid71958 00:25:27.387 Removing: /var/run/dpdk/spdk_pid71977 00:25:27.387 Removing: /var/run/dpdk/spdk_pid71991 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72001 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72020 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72033 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72049 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72079 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72093 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72122 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72189 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72217 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72227 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72254 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72265 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72267 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72309 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72323 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72346 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72361 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72365 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72373 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72384 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72388 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72397 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72407 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72430 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72462 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72466 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72493 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72504 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72506 
00:25:27.387 Removing: /var/run/dpdk/spdk_pid72552 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72558 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72590 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72592 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72594 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72607 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72609 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72614 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72627 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72629 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72711 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72753 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72860 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72896 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72941 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72961 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72972 00:25:27.387 Removing: /var/run/dpdk/spdk_pid72992 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73024 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73039 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73117 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73128 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73166 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73227 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73273 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73297 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73391 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73439 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73466 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73698 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73784 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73813 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73837 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73876 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73904 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73943 00:25:27.387 Removing: /var/run/dpdk/spdk_pid73969 00:25:27.387 Removing: /var/run/dpdk/spdk_pid74350 00:25:27.387 Removing: /var/run/dpdk/spdk_pid74390 00:25:27.387 Removing: /var/run/dpdk/spdk_pid74724 00:25:27.387 Removing: /var/run/dpdk/spdk_pid75170 00:25:27.387 Removing: /var/run/dpdk/spdk_pid75445 00:25:27.387 Removing: /var/run/dpdk/spdk_pid76275 00:25:27.387 Removing: /var/run/dpdk/spdk_pid77169 00:25:27.387 Removing: /var/run/dpdk/spdk_pid77292 00:25:27.387 Removing: /var/run/dpdk/spdk_pid77354 00:25:27.387 Removing: /var/run/dpdk/spdk_pid78755 00:25:27.387 Removing: /var/run/dpdk/spdk_pid79063 00:25:27.647 Removing: /var/run/dpdk/spdk_pid82790 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83137 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83247 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83380 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83401 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83422 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83443 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83522 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83650 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83786 00:25:27.647 Removing: /var/run/dpdk/spdk_pid83873 00:25:27.647 Removing: /var/run/dpdk/spdk_pid84067 00:25:27.647 Removing: /var/run/dpdk/spdk_pid84130 00:25:27.647 Removing: /var/run/dpdk/spdk_pid84215 00:25:27.647 Removing: /var/run/dpdk/spdk_pid84565 00:25:27.647 Removing: /var/run/dpdk/spdk_pid84974 00:25:27.647 Removing: /var/run/dpdk/spdk_pid84975 00:25:27.647 Removing: /var/run/dpdk/spdk_pid84976 00:25:27.647 Removing: /var/run/dpdk/spdk_pid85238 00:25:27.647 Removing: /var/run/dpdk/spdk_pid85474 00:25:27.647 Removing: /var/run/dpdk/spdk_pid85476 00:25:27.647 Removing: 
/var/run/dpdk/spdk_pid87792 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88175 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88181 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88508 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88522 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88542 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88569 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88579 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88667 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88673 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88777 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88784 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88893 00:25:27.647 Removing: /var/run/dpdk/spdk_pid88899 00:25:27.647 Removing: /var/run/dpdk/spdk_pid89349 00:25:27.647 Removing: /var/run/dpdk/spdk_pid89398 00:25:27.647 Removing: /var/run/dpdk/spdk_pid89502 00:25:27.647 Removing: /var/run/dpdk/spdk_pid89585 00:25:27.647 Removing: /var/run/dpdk/spdk_pid89926 00:25:27.647 Removing: /var/run/dpdk/spdk_pid90123 00:25:27.647 Removing: /var/run/dpdk/spdk_pid90531 00:25:27.647 Removing: /var/run/dpdk/spdk_pid91088 00:25:27.647 Removing: /var/run/dpdk/spdk_pid91928 00:25:27.647 Removing: /var/run/dpdk/spdk_pid92568 00:25:27.647 Removing: /var/run/dpdk/spdk_pid92570 00:25:27.647 Removing: /var/run/dpdk/spdk_pid94574 00:25:27.647 Removing: /var/run/dpdk/spdk_pid94621 00:25:27.647 Removing: /var/run/dpdk/spdk_pid94668 00:25:27.647 Removing: /var/run/dpdk/spdk_pid94722 00:25:27.647 Removing: /var/run/dpdk/spdk_pid94824 00:25:27.647 Removing: /var/run/dpdk/spdk_pid94877 00:25:27.647 Removing: /var/run/dpdk/spdk_pid94924 00:25:27.647 Removing: /var/run/dpdk/spdk_pid94977 00:25:27.647 Removing: /var/run/dpdk/spdk_pid95323 00:25:27.647 Removing: /var/run/dpdk/spdk_pid96528 00:25:27.647 Removing: /var/run/dpdk/spdk_pid96666 00:25:27.647 Removing: /var/run/dpdk/spdk_pid96903 00:25:27.647 Removing: /var/run/dpdk/spdk_pid97498 00:25:27.647 Removing: /var/run/dpdk/spdk_pid97652 00:25:27.647 Removing: /var/run/dpdk/spdk_pid97809 00:25:27.647 Removing: /var/run/dpdk/spdk_pid97905 00:25:27.647 Removing: /var/run/dpdk/spdk_pid98069 00:25:27.647 Removing: /var/run/dpdk/spdk_pid98171 00:25:27.647 Removing: /var/run/dpdk/spdk_pid98885 00:25:27.647 Removing: /var/run/dpdk/spdk_pid98915 00:25:27.647 Removing: /var/run/dpdk/spdk_pid98956 00:25:27.647 Removing: /var/run/dpdk/spdk_pid99203 00:25:27.647 Removing: /var/run/dpdk/spdk_pid99234 00:25:27.647 Removing: /var/run/dpdk/spdk_pid99268 00:25:27.647 Removing: /var/run/dpdk/spdk_pid99741 00:25:27.647 Removing: /var/run/dpdk/spdk_pid99750 00:25:27.647 Removing: /var/run/dpdk/spdk_pid99996 00:25:27.647 Clean 00:25:27.906 16:30:53 -- common/autotest_common.sh@1453 -- # return 0 00:25:27.906 16:30:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:27.906 16:30:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:27.906 16:30:53 -- common/autotest_common.sh@10 -- # set +x 00:25:27.906 16:30:53 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:27.906 16:30:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:27.906 16:30:53 -- common/autotest_common.sh@10 -- # set +x 00:25:27.906 16:30:53 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:27.906 16:30:53 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:27.906 16:30:53 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:27.906 16:30:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:27.906 
16:30:53 -- spdk/autotest.sh@398 -- # hostname 00:25:27.906 16:30:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:28.165 geninfo: WARNING: invalid characters removed from testname! 00:25:50.106 16:31:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:53.398 16:31:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:55.302 16:31:20 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:57.835 16:31:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:00.373 16:31:25 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:02.904 16:31:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:05.437 16:31:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:05.437 16:31:30 -- spdk/autorun.sh@1 -- $ timing_finish 00:26:05.437 16:31:30 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:26:05.437 16:31:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:05.437 16:31:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:05.437 16:31:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build 
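The lcov invocations traced above are the coverage post-processing step of autotest.sh: capture a tracefile for this run, merge it with the pre-test baseline, then strip bundled DPDK, system, and example/app sources from the merged data. Condensed into readable form (the --rc branch/function-coverage switches and --ignore-errors flags from the trace are omitted for brevity):

# Sketch of the coverage post-processing seen in the trace.
repo=/home/vagrant/spdk_repo/spdk
out=$repo/../output

lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"       # capture this run
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge with baseline
lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"               # drop bundled DPDK
lcov -q -r "$out/cov_total.info" '/usr/*' '*/examples/vmd/*' \
     '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o "$out/cov_total.info"              # drop system and example sources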
Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:05.437 + [[ -n 5998 ]] 00:26:05.437 + sudo kill 5998 00:26:05.447 [Pipeline] } 00:26:05.462 [Pipeline] // timeout 00:26:05.466 [Pipeline] } 00:26:05.479 [Pipeline] // stage 00:26:05.484 [Pipeline] } 00:26:05.497 [Pipeline] // catchError 00:26:05.505 [Pipeline] stage 00:26:05.508 [Pipeline] { (Stop VM) 00:26:05.518 [Pipeline] sh 00:26:05.796 + vagrant halt 00:26:09.084 ==> default: Halting domain... 00:26:15.666 [Pipeline] sh 00:26:15.951 + vagrant destroy -f 00:26:18.500 ==> default: Removing domain... 00:26:18.828 [Pipeline] sh 00:26:19.104 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:19.112 [Pipeline] } 00:26:19.127 [Pipeline] // stage 00:26:19.133 [Pipeline] } 00:26:19.147 [Pipeline] // dir 00:26:19.152 [Pipeline] } 00:26:19.167 [Pipeline] // wrap 00:26:19.174 [Pipeline] } 00:26:19.186 [Pipeline] // catchError 00:26:19.195 [Pipeline] stage 00:26:19.197 [Pipeline] { (Epilogue) 00:26:19.210 [Pipeline] sh 00:26:19.490 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:24.773 [Pipeline] catchError 00:26:24.775 [Pipeline] { 00:26:24.788 [Pipeline] sh 00:26:25.070 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:25.329 Artifacts sizes are good 00:26:25.339 [Pipeline] } 00:26:25.353 [Pipeline] // catchError 00:26:25.364 [Pipeline] archiveArtifacts 00:26:25.370 Archiving artifacts 00:26:25.499 [Pipeline] cleanWs 00:26:25.511 [WS-CLEANUP] Deleting project workspace... 00:26:25.511 [WS-CLEANUP] Deferred wipeout is used... 00:26:25.518 [WS-CLEANUP] done 00:26:25.520 [Pipeline] } 00:26:25.535 [Pipeline] // stage 00:26:25.540 [Pipeline] } 00:26:25.555 [Pipeline] // node 00:26:25.561 [Pipeline] End of Pipeline 00:26:25.604 Finished: SUCCESS